P.S. Homann; B.T. Bormann; J.R. Boyle; R.L. Darbyshire; R. Bigley
2008-01-01
Detecting changes in forest soil C and N is vital to the study of global budgets and long-term ecosystem productivity. Identifying differences among land-use practices may guide future management. Our objective was to determine the relation of minimum detectable changes (MDCs) and minimum detectable differences between treatments (MDDs) to soil C and N variability at...
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems because of their many potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We propose direct-calculation and equivalent-calculation methods for MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, and the results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance, respectively, of thermal imaging systems for gas leaks, and together constitute the main performance indicators of gas leak detection systems.
DIF Detection Using Multiple-Group Categorical CFA with Minimum Free Baseline Approach
ERIC Educational Resources Information Center
Chang, Yu-Wei; Huang, Wei-Kang; Tsai, Rung-Ching
2015-01-01
The aim of this study is to assess the efficiency of using the multiple-group categorical confirmatory factor analysis (MCCFA) and the robust chi-square difference test in differential item functioning (DIF) detection for polytomous items under the minimum free baseline strategy. While testing for DIF items, despite the strong assumption that all…
Minimum Detectable Dose as a Measure of Bioassay Programme Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, Eugene H.
2003-01-01
This paper suggests that minimum detectable dose (MDD) be used to describe the capability of bioassay programs for which intakes are expected to be rare. This allows expression of the capability in units that correspond directly to primary dose limits. The concept uses the well-established analytical statistic minimum detectable amount (MDA) as the starting point and assumes MDA detection at a prescribed time post intake. The resulting dose can then be used as an indication of the adequacy or capability of the program for demonstrating compliance with the performance criteria. MDDs can be readily tabulated or plotted to demonstrate the effectiveness of different types of monitoring programs. The inclusion of cost factors for bioassay measurements can allow optimisation.
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J
2017-09-01
Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when the minimum duration was <0.5 s, and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when the minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
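As an illustration of the effort-detection step described above, the sketch below flags efforts as runs of consecutive samples at or above a threshold that last at least a minimum duration; the thresholds, sampling rate, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def detect_efforts(signal, threshold, min_duration_s, fs=10.0):
    """Return (start, end) sample indices of efforts where `signal` stays
    >= `threshold` for at least `min_duration_s` seconds (fs = sample rate, Hz)."""
    above = signal >= threshold
    edges = np.diff(above.astype(int))          # +1 at rising edges, -1 at falling edges
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    min_samples = int(round(min_duration_s * fs))
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_samples]

# Example: count sprint efforts (>= 7.00 m/s) lasting at least 0.5 s in a 10-Hz velocity trace
# velocity = ...  # filtered velocity data in m/s
# n_sprints = len(detect_efforts(velocity, 7.00, 0.5))
```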
Analysis of Aircraft Fuels and Related Materials
1982-09-01
…content by the Karl Fischer method. Each 2040 solvent sample represented a different step in a clean-up procedure conducted by Aero Propulsion… Method 163-80, which utilizes a potentiometric titration with alcoholic silver nitrate. This method has a minimum detectability of 1 ppm and a repeatability of 0.1 ppm…
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2014-01-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
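The 562 figure quoted above can be reproduced with a standard two-arm sample-size formula under the assumptions stated in the abstract (two-sided α = 0.05, 80% power, SD = 20 mL/min, 10% loss to follow-up); the sketch below is an illustrative calculation, not the authors' code.

```python
from scipy.stats import norm

def total_n(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    """Total two-arm sample size to detect a between-group GFR difference
    `delta` (mL/min), inflated for loss to follow-up."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96
    z_b = norm.ppf(power)           # ~0.84
    n_per_arm = 2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2
    return 2 * n_per_arm / (1 - loss)

print(round(total_n(5)))    # ~558, close to the 562 quoted in the abstract
print(round(total_n(10)))   # ~140, consistent with the easier 10 mL/min target
```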
Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.
2013-04-16
A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
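A minimal sketch of the detection logic described in this abstract is given below; the enable conditions, threshold values, and timer handling are illustrative assumptions rather than the patented implementation.

```python
def update_low_cell_timer(cell_voltages, coolant_temp_C, current_density,
                          low_cell_timer_s, run_timer_s, dt_s,
                          min_temp_C=40.0, max_density=0.2,
                          delta_thresh_V=0.10, min_cell_thresh_V=0.55):
    """One update step: increment the low-performing-cell timer when the stack
    is running in the enabled operating window and both voltage criteria are met
    (all numeric thresholds are placeholders, not values from the patent)."""
    run_timer_s += dt_s
    enabled = coolant_temp_C > min_temp_C and current_density < max_density
    if enabled:
        avg_v = sum(cell_voltages) / len(cell_voltages)
        min_v = min(cell_voltages)
        if (avg_v - min_v) > delta_thresh_V and min_v < min_cell_thresh_V:
            low_cell_timer_s += dt_s
    ratio = low_cell_timer_s / run_timer_s   # a high ratio flags a low-performing cell
    return low_cell_timer_s, run_timer_s, ratio
```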
The Impact of a City-Level Minimum-Wage Policy on Supermarket Food Prices in Seattle-King County.
Otten, Jennifer J; Buszkiewicz, James; Tang, Wesley; Aggarwal, Anju; Long, Mark; Vigdor, Jacob; Drewnowski, Adam
2017-09-09
Background: Many states and localities throughout the U.S. have adopted higher minimum wages. Higher labor costs among low-wage food system workers could result in higher food prices. Methods: Using a market basket of 106 foods, food prices were collected at affected chain supermarket stores in Seattle and same-chain unaffected stores in King County (n = 12 total, six per location). Prices were collected 1 month pre-policy enactment (March 2015) and 1 month post-policy enactment (May 2015), then again 1 year post-policy enactment (May 2016). Unpaired t-tests were used to detect price differences by location at a fixed time, while paired t-tests were used to detect price differences across time within a fixed store chain. A multi-level linear differences-in-differences model was used to detect changes in average market basket item food prices over time across regions, overall and by food group. Results: There were no significant differences in overall market basket or item-level costs at one month (-$0.01, SE = 0.05, p = 0.884) or one year post-policy enactment (-$0.02, SE = 0.08, p = 0.772). No significant increases were observed by food group. Conclusions: There is no evidence of change in supermarket food prices by market basket or increase in prices by food group in response to the implementation of Seattle's minimum wage ordinance.
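In generic form, the difference-in-differences comparison described above can be written as the following illustrative specification (the variable names and the store-level random effect are assumptions for exposition, not necessarily the exact model the authors fit):

```latex
\[
p_{ist} = \beta_0 + \beta_1\,\mathrm{Seattle}_s + \beta_2\,\mathrm{Post}_t
        + \beta_3\,(\mathrm{Seattle}_s \times \mathrm{Post}_t) + u_s + \varepsilon_{ist},
\]
```

where p_{ist} is the price of item i in store s at time t, u_s is a store-level random effect capturing the multi-level structure, and β₃ is the difference-in-differences estimate of the policy effect on prices.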
Applying six classifiers to airborne hyperspectral imagery for detecting giant reed
USDA-ARS?s Scientific Manuscript database
This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Size (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Ion-neutral Coupling During Deep Solar Minimum
NASA Technical Reports Server (NTRS)
Huang, Cheryl Y.; Roddy, Patrick A.; Sutton, Eric K.; Stoneback, Russell; Pfaff, Robert F.; Gentile, Louise C.; Delay, Susan H.
2013-01-01
The equatorial ionosphere under conditions of deep solar minimum exhibits structuring due to tidal forces. Data from instruments carried by the Communication/Navigation Outage Forecasting System (C/NOFS), which was launched in April 2008, have been analyzed for the first 2 years following launch. The Planar Langmuir Probe (PLP), Ion Velocity Meter (IVM) and Vector Electric Field Investigation (VEFI) all detect periodic structures during the 2008-2010 period which appear to be tides. However, when the tidal features detected by these instruments are compared, there are distinctive and significant differences between the observations. Tides in neutral densities measured by the Gravity Recovery and Climate Experiment (GRACE) satellite were also observed during June 2008. In addition, Broad Plasma Decreases (BPDs) appear as a deep absolute minimum in the plasma and neutral density tidal pattern. These are co-located with regions of large downward-directed ion meridional velocities and minima in the zonal drifts, all on the nightside. The region in which BPDs occur coincides with a peak in the occurrence rate of dawn depletions in plasma density observed by the Defense Meteorological Satellite Program (DMSP) spacecraft, as well as a minimum in radiance detected by UV imagers on the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) and IMAGE satellites.
Petitot, Maud; Manceau, Nicolas; Geniez, Philippe; Besnard, Aurélien
2014-09-01
Setting up effective conservation strategies requires the precise determination of the targeted species' distribution area and, if possible, its local abundance. However, detection issues make these objectives complex for most vertebrates. The detection probability is usually <1 and is highly dependent on species phenology and other environmental variables. The aim of this study was to define an optimized survey protocol for the Mediterranean amphibian community, that is, to determine the most favorable periods and the most effective sampling techniques for detecting all species present on a site in a minimum number of field sessions and a minimum amount of prospecting effort. We visited 49 ponds located in the Languedoc region of southern France on four occasions between February and June 2011. Amphibians were detected using three methods: nighttime call count, nighttime visual encounter, and daytime netting. The detection/nondetection data obtained were then modeled using site-occupancy models. The detection probability of amphibians differed sharply between species, the survey method used, and the date of the survey. These three covariates also interacted. Thus, a minimum of three visits spread over the breeding season, using a combination of all three survey methods, is needed to reach a 95% detection level for all species in the Mediterranean region. Synthesis and applications: detection/nondetection surveys combined with a site-occupancy modeling approach are a powerful way to estimate the detection probability and to determine the prospecting effort necessary to assert that a species is absent from a site.
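The number of visits needed to reach a given cumulative detection level follows directly from the per-visit detection probability; the sketch below illustrates this relationship (the per-visit probabilities are made-up values, not estimates from the study).

```python
import math

def visits_needed(p_per_visit, target=0.95):
    """Smallest number of visits K with cumulative detection
    1 - (1 - p)^K >= target, assuming independent visits."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_per_visit))

for p in (0.4, 0.6, 0.8):          # hypothetical per-visit detection probabilities
    print(p, visits_needed(p))     # e.g. p = 0.6 -> 4 visits; p = 0.8 -> 2 visits
```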
Venkatarajappa, P
2001-01-01
The toxic effect of Cypermethrin 10 EC was estimated in the body wall and digestive system of Oryctes rhinoceros larvae by HPLC after exposing them to different concentrations (0.125, 0.25 and 0.5%). Among the concentrations used, maximum residues were detected in the body wall at 0.25%, whereas at the higher concentration (0.5%) the residue detected was minimum. The Cypermethrin treatment was found to be highly toxic up to 12 h after treatment, after which toxicity declined, reaching a minimum by 24 h. No Cypermethrin residue could be detected in the digestive system. The experiments indicate that the pesticide becomes concentrated in the body wall to the greatest extent.
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.
2017-05-01
A fully invariant system helps resolve difficulties in object detection when the camera or object orientation and position are unknown. In this paper, the proposed correlation-filter-based mechanism provides the capability to suppress noise, clutter and occlusion. The Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while constraining the correlation peak value. A Difference of Gaussian (DOG) wavelet has been added at the preprocessing stage of the proposed filter design, which facilitates target detection in orientation-variant cluttered environments. A logarithmic transformation is combined with a DOG composite minimum average correlation energy filter (WMACE), capable of producing sharp correlation peaks despite geometric distortion of the target object. The proposed filter shows improved performance over several other variant correlation filters, which are discussed in the results section.
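For reference, the standard MACE filter (of which the WMACE above is a variant) has the well-known closed-form frequency-domain solution; this is the textbook formulation, not necessarily the exact expression used in the paper:

```latex
\[
\mathbf{h} = \mathbf{D}^{-1}\mathbf{X}\left(\mathbf{X}^{+}\mathbf{D}^{-1}\mathbf{X}\right)^{-1}\mathbf{u},
\]
```

where the columns of X are the 2-D Fourier transforms of the training images (lexicographically ordered), D is a diagonal matrix containing the average power spectrum of the training images, X⁺ denotes the conjugate transpose, and u is the vector of prescribed correlation-peak values; minimizing the average correlation energy subject to these peak constraints is what produces the sharp peaks mentioned above.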
Minimum time search in uncertain dynamic domains with complex sensorial platforms.
Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel
2014-08-04
The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differential sensorial models. This paper covers the gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models.
The investigation of solar activity signals by analyzing of tree ring chronological scales
NASA Astrophysics Data System (ADS)
Nickiforov, M. G.
2017-07-01
The present study examines the possibility of detecting short cycles and global minima of solar activity by analyzing dendrochronologies. Since the study of Douglass, which was devoted to the question of climatic cycles and the growth of trees, it has been believed that the analysis of dendrochronologies allows detection of the Wolf-Schwabe cycle. According to his results, the cycle was absent during the Maunder minimum and reappeared after its completion. Having checked Douglass's conclusions using 10 dendrochronologies of yellow pines from Arizona, which cover the period from 1600 to 1900, we have come to the opposite conclusion. The verification shows that: (a) none of the considered dendrochronologies allows detection of an 11-year cycle; and (b) the behaviour of the short-period signal does not undergo significant changes before, during or after the Maunder minimum. A similar attempt to detect global minima of solar activity using five dendrochronologies from different areas did not lead to positive results. On the one hand, the signal of a global extremum is not always recorded in dendrochronology; on the other hand, a deep depression of annual rings may suggest the existence of a global minimum of solar activity which is actually absent.
Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms
NASA Astrophysics Data System (ADS)
Baluev, R. V.
2018-04-01
Objective detection of specific patterns in statistical distributions, such as groupings, gaps or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of exoplanet diversity, Solar System minor-body statistics, extragalactic studies, etc. We adapt the powerful technique of wavelet transforms to this generalized task, placing a strong emphasis on assessing the significance of the detected patterns. Among other things, our method also involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-closed algorithmic pipeline aimed at processing statistical samples. It is currently applicable to one-dimensional distributions only, but it is flexible enough to undergo further generalizations and development.
Van Beek, T A; Blaakmeer, A
1989-03-03
A method has been developed for the quantitation of the bitter component limonin in grapefruit juice and other citrus juices. The sample clean-up consisted of centrifugation, filtration and a selective, rapid and reproducible purification with a C2 solid-phase extraction column. The limonin concentration was determined by high-performance liquid chromatography on a C18 column with UV detection at 210 nm. A linear response was obtained from 0.0 to 45 ppm limonin. The minimum detectable amount was 2 ng. The minimum concentration that could be determined with good precision, without a preconcentration step, was 0.1 ppm. The method was also used for the determination of limonin in different citrus fruits, including navel oranges, mandarins, lemons, limes, pomelos and uglis.
NASA Astrophysics Data System (ADS)
Fathy, Ibrahim
2016-07-01
This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (geographic coordinates 29.18 N, 30.50 E) and Aswan (geographic coordinates 23.59 N, 32.51 E) in Egypt. A second-order Butterworth band-pass filter was used to filter and analyze the horizontal H-component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected by the two stations simultaneously; in addition, the average amplitude of the pulsation signals was calculated.
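A minimal sketch of the band-pass filtering step is shown below; the Pc5 band limits used in the example (periods of roughly 150-600 s) follow the conventional pulsation classification and are an assumption, not values quoted from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(h_component, low_hz, high_hz, fs=1.0, order=2):
    """Zero-phase second-order Butterworth band-pass filter of 1-s magnetometer data."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, h_component)

# Example: isolate the Pc5 band (~150-600 s periods) from the H-component
# h = np.loadtxt("magdas_H.txt")            # hypothetical 1-s data file
# pc5 = bandpass(h, 1.0 / 600.0, 1.0 / 150.0)
```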
Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren
2011-09-01
Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
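The minimum detectable change in studies of this kind is usually derived from the standard error of measurement and the reliability coefficient; a commonly used form (stated here for reference, not quoted from the paper) is

```latex
\[
\mathrm{SEM} = \mathrm{SD}\sqrt{1-\mathrm{ICC}}, \qquad
\mathrm{MDC}_{95} = 1.96\,\sqrt{2}\,\mathrm{SEM},
\]
```

where SD is the between-subject standard deviation of the pressure pain threshold, ICC is the relevant reliability coefficient, and the factor of √2 accounts for measurement error in both of the two measurements being compared.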
Frank, C; Bray, D; Rademaker, A; Chrusch, C; Sabiston, P; Bodie, D; Rangayyan, R
1989-01-01
To establish a normal baseline for comparison, thirty-one thousand collagen fibril diameters were measured in calibrated transmission electron microscopy (TEM) photomicrographs of normal rabbit medial collateral ligaments (MCLs). A new automated method of quantitation was used to statistically compare fibril minimum-diameter distributions at one midsubstance location in both MCLs from six animals at 3 months of age (immature) and three animals at 10 months of age (mature). Pooled results demonstrate that rabbit MCLs have statistically different (p < 0.001) mean minimum diameters at these two ages. Interanimal differences in mean fibril minimum diameters were also significant (p < 0.001) and varied by 20% to 25% in both mature and immature animals. Finally, there were significant differences (p < 0.001) in mean diameters and distributions from side to side in all animals. These mean left-to-right differences were less than 10% in all mature animals but as much as 62% in some immature animals. Statistical analysis of these data demonstrates that animal-to-animal comparisons using these protocols require a large number of animals, with appropriate numbers of fibrils measured, to detect small intergroup differences. With experiments that compare left to right ligaments, far fewer animals are required to detect similarly small differences. These results demonstrate the necessity for rigorous control of sampling, an extensive normal baseline and statistically confirmed experimental designs in any TEM comparisons of collagen fibril diameters.
NASA Astrophysics Data System (ADS)
Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.
2008-11-01
Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
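The combination step can be illustrated with a short sketch: two trained classifiers produce class-membership probabilities for the "chimney" class, and the outputs are fused with maximum, minimum and mean rules. The classifiers, synthetic feature matrices and variable names below are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the ranked seismic attributes and expert labels
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(50, 6))

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_train, y_train)
svc = SVC(probability=True, random_state=0).fit(X_train, y_train)

p_mlp = mlp.predict_proba(X_test)[:, 1]   # P(chimney) from the MLP
p_svc = svc.predict_proba(X_test)[:, 1]   # P(chimney) from the SVC

combined = {
    "max":  np.maximum(p_mlp, p_svc),
    "min":  np.minimum(p_mlp, p_svc),     # the logical-minimum rule highlighted in the abstract
    "mean": (p_mlp + p_svc) / 2.0,
}
chimney_mask = combined["min"] > 0.5      # final decision with the minimum rule
```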
Kang, Sinkyu; Hong, Suk Young
2016-01-01
A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km² in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km². The lake area decreased by -9.3% at an annual rate of -53.7 km² yr⁻¹ during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability. PMID:27007233
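The minimum-composite step itself is simple to express; the sketch below computes NDVI from daily red/NIR reflectance and takes the per-pixel minimum over each 15-day window (the array names and shapes are illustrative assumptions).

```python
import numpy as np

def ndvi(red, nir):
    """Per-pixel NDVI from reflectance arrays of identical shape."""
    return (nir - red) / (nir + red + 1e-9)

def minimum_composite(red_stack, nir_stack, window=15):
    """Minimum NDVI composite over consecutive `window`-day blocks.
    red_stack, nir_stack: arrays shaped (days, rows, cols)."""
    daily = ndvi(red_stack, nir_stack)
    n_windows = daily.shape[0] // window
    out = [np.nanmin(daily[i * window:(i + 1) * window], axis=0)
           for i in range(n_windows)]
    return np.stack(out)   # (n_windows, rows, cols); water pixels stay low in every window
```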
Tactical Miniature Crystal Oscillator.
1980-08-01
…manufactured by this process are expected to require 30 days to achieve minimum aging rates. (4) FUNDAMENTAL CRYSTAL RETRACE MEASUREMENT. An important crystal… considerable measurement time to detect differences and characterize components. Before investing considerable time in a candidate reactive element, a…
USDA-ARS?s Scientific Manuscript database
Accurate estimation of soil organic carbon (SOC) is crucial to efforts to improve soil fertility and stabilize atmospheric CO2 concentrations by sequestering carbon (C) in soils. Soil organic C measurements are, however, often highly variable and management practices can take a long time to produce ...
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
Night Vision Laboratory Static Performance Model for Thermal Viewing Systems
1975-04-01
Research and Development Technical Report, ECOM: Night Vision Laboratory Static Performance Model for Thermal Viewing Systems. …minimum resolvable temperature, infrared imaging, minimum detectable temperature, detection and recognition performance, night vision, noise equivalent temperature… modulation transfer function (MTF). The noise characteristics are specified by the noise equivalent temperature difference (NEΔT). The next sections…
Reflective measurement of water concentration using millimeter wave illumination
NASA Astrophysics Data System (ADS)
Sung, Shijun; Bennett, David; Taylor, Zachary; Bajwa, Neha; Tewari, Priyamvada; Maccabi, Ashkan; Culjat, Martin; Singh, Rahul; Grundfest, Warren
2011-04-01
THz and millimeter wave technology have shown the potential to become a valuable medical imaging tool because of their sensitivity to water and safe, non-ionizing photon energy. Using the high dielectric constant of water in these frequency bands, reflection-mode THz sensing systems can be employed to measure water content in a target with high sensitivity. This phenomenology may lead to the development of clinical systems to measure the hydration state of biological targets. Such measurements may be useful in fast and convenient diagnosis of conditions whose symptoms can be characterized by changes in water concentration, such as skin burns, dehydration, or chemical exposure. To explore millimeter wave sensitivity to hydration, a reflectometry system was constructed to make water concentration measurements at 100 GHz, and the minimum detectable water concentration difference was measured. This system employs a 100 GHz Gunn diode source and a Golay cell detector to perform point reflectivity measurements of a wetted polypropylene towel as it dries on a mass balance. A noise-limited, minimum detectable concentration difference of less than 0.5% by mass can be detected in water concentrations ranging from 70% to 80%. This sensitivity is sufficient to detect hydration changes caused by many diseases and pathologies and may be useful in the future as a diagnostic tool for the assessment of burns and other surface pathologies.
NASA Astrophysics Data System (ADS)
Tóbiás, Roland; Furtenbacher, Tibor; Császár, Attila G.
2017-12-01
Cycle bases of graph theory are introduced for the analysis of transition data deposited in line-by-line rovibronic spectroscopic databases. The principal advantage of using cycle bases is that outlier transitions, which are almost always present in spectroscopic databases built from experimental data originating from many different sources, can be detected and identified straightforwardly and automatically. The data available for six water isotopologues, H
Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.
Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro
2018-05-28
In this work, we propose a novel visible light communication (VLC) scheme that uses multiple red-green-blue (RGB) LED triplets, each with a different red, green and blue emission spectrum, to mitigate inter-color interference under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed in order to demonstrate how metameric properties have been exploited. At the receiver, multiple photodiodes, each with a color filter tuned to one transmit light emitting diode (LED), are employed. Three different detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation, and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
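For illustration, the zero-forcing and linear MMSE detectors named above can be written in their generic textbook form for a received vector y = Hx + n, where H is the color-crosstalk channel matrix; this sketch is a standard formulation under assumed signal and noise statistics, not the exact receiver derived in the paper.

```python
import numpy as np

def zero_forcing(H, y):
    """ZF estimate: invert the color-crosstalk channel (pseudo-inverse)."""
    return np.linalg.pinv(H) @ y

def lmmse(H, y, noise_var, signal_var=1.0):
    """Linear MMSE estimate for y = Hx + n with i.i.d. signal and noise."""
    Nt = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + (noise_var / signal_var) * np.eye(Nt)) @ H.conj().T
    return W @ y

# OOK decision: threshold the per-LED estimates at half the "on" level
# bits = (zero_forcing(H, y) > 0.5).astype(int)
```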
cWINNOWER algorithm for finding fuzzy dna motifs
NASA Technical Reports Server (NTRS)
Liang, S.; Samanta, M. P.; Biegel, B. A.
2004-01-01
The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if a clique consisting of a sufficiently large number of mutated copies of the motif (i.e., the signals) is present in the DNA sequence. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum detectable clique size qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12,000 for (l, d) = (15, 4). Copyright Imperial College Press.
cWINNOWER Algorithm for Finding Fuzzy DNA Motifs
NASA Technical Reports Server (NTRS)
Liang, Shoudan
2003-01-01
The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if multiple mutated copies of the motif (i.e., the signals) are present in the DNA sequence in sufficient abundance. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum number of detectable motifs qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc, by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12000 for (l,d) = (15,4).
Compact TDLAS based sensor design using interband cascade lasers for mid-IR trace gas sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Lei; Tittel, Frank K.; Li, Chunguang
2016-02-25
Two compact TDLAS sensor systems based on different structural optical cores were developed. The two optical cores combine two recent developments, a gallium antimonide (GaSb)-based interband cascade laser (ICL) and a compact multipass gas cell (MPGC), with the goal of creating compact TDLAS-based sensors for mid-IR gas detection with high detection sensitivity and low power consumption. The sensors achieved minimum detection limits of ~5 ppbv and ~8 ppbv, respectively, for CH₄ and C₂H₆ concentration measurements with a 3.7-W power consumption.
Method For Detecting The Presence Of A Ferromagnetic Object
Roybal, Lyle G.
2000-11-21
A method for detecting a presence or an absence of a ferromagnetic object within a sensing area may comprise the steps of sensing, during a sample time, a magnetic field adjacent the sensing area; producing surveillance data representative of the sensed magnetic field; determining an absolute value difference between a maximum datum and a minimum datum comprising the surveillance data; and determining whether the absolute value difference has a positive or negative sign. The absolute value difference and the corresponding positive or negative sign thereof forms a representative surveillance datum that is indicative of the presence or absence in the sensing area of the ferromagnetic material.
A brief overview on radon measurements in drinking water.
Jobbágy, Viktor; Altzitzoglou, Timotheos; Malo, Petya; Tanner, Vesa; Hult, Mikael
2017-07-01
The aim of this paper is to present information about currently used standard and routine methods for radon analysis in drinking water. An overview is given of the current situation and the performance of different measurement methods based on literature data. The following parameters are compared and discussed: initial sample volume and sample preparation, detection systems, minimum detectable activity, counting efficiency, interferences, measurement uncertainty, sample capacity and overall turnaround time. Moreover, the parametric levels for radon in drinking water from the different legislations and directives/guidelines on radon are presented. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
The Frequency of Low-Mass Exoplanets
NASA Astrophysics Data System (ADS)
O'Toole, S. J.; Jones, H. R. A.; Tinney, C. G.; Butler, R. P.; Marcy, G. W.; Carter, B.; Bailey, J.; Wittenmyer, R. A.
2009-08-01
We report first results from the Anglo-Australian Telescope Rocky Planet Search: an intensive, high-precision Doppler planet search targeting low-mass exoplanets in contiguous 48-night observing blocks. On this run, we targeted 24 bright, nearby and intrinsically stable Sun-like stars selected from the Anglo-Australian Planet Search's main sample. These observations have already detected one low-mass planet reported elsewhere (HD 16417b), and here we reconfirm the detection of HD 4308b. Further, we have Monte Carlo simulated data from this run on a star-by-star basis to produce robust detection constraints. These simulations demonstrate clear differences in the exoplanet detectability functions from star to star due to differences in sampling, data quality and intrinsic stellar stability. They reinforce the importance of star-by-star simulation when interpreting the data from Doppler planet searches. These simulations indicate that for some of our target stars we are sensitive to close-orbiting planets as small as a few Earth masses. The two low-mass planets present in our 24-star sample indicate that the exoplanet minimum mass function at low masses is likely to be flat, α ~ -1 (for dN/dM ∝ M^α), and that between 15% ± 10% (at α = -0.3) and 48% ± 34% (at α = -1.3) of stars host planets with orbital periods of less than 16 days and minimum masses greater than 3 M⊕.
NASA Technical Reports Server (NTRS)
Fern, Lisa Carolynn
2017-01-01
The primary activity for the UAS-NAS Human Systems Integration (HSI) sub-project in Phase 1 was support of RTCA Special Committee 228 Minimum Operational Performance Standards (MOPS). We provide data on the effect of various Detect and Avoid (DAA) display features with respect to pilot performance of the remain well clear function in order to determine the minimum requirements for DAA displays.
Wang, Chao; Guo, Xiao-Jing; Xu, Jin-Fang; Wu, Cheng; Sun, Ya-Lin; Ye, Xiao-Fei; Qian, Wei; Ma, Xiu-Qiang; Du, Wen-Min; He, Jia
2012-01-01
The detection of signals of adverse drug events (ADEs) has increased because of the use of data mining algorithms in spontaneous reporting systems (SRSs). However, different data mining algorithms have different traits and conditions for application. The objective of our study was to explore the application of association rule (AR) mining in ADE signal detection and to compare its performance with that of other algorithms. Monte Carlo simulation was applied to generate drug-ADE reports randomly according to the characteristics of SRS datasets. One thousand simulated datasets were mined by AR and other algorithms. On average, 108,337 reports were generated by the Monte Carlo simulation. Based on the predefined criterion that 10% of the drug-ADE combinations were true signals, with RR equal to 10, 4.9, 1.5, and 1.2, AR detected, on average, 284 suspected associations with a minimum support of 3 and a minimum lift of 1.2. The area under the receiver operating characteristic (ROC) curve for AR was 0.788, which was equivalent to that of the other algorithms. Additionally, AR was applied to reports submitted to the Shanghai SRS in 2009. Five hundred seventy combinations were detected using AR from 24,297 SRS reports, and they were compared with recognized ADEs identified by clinical experts and various other sources. AR appears to be an effective method for ADE signal detection, in both simulated and real SRS datasets. The limitations of this method exposed in our study, i.e., non-uniform threshold settings and redundant rules, require further research.
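To make the thresholds in the abstract concrete, the sketch below computes support and lift for each drug-ADE pair directly from report counts and keeps pairs meeting a minimum support of 3 reports and a minimum lift of 1.2; the data structure and variable names are illustrative assumptions, and this is a simplified pairwise view of AR mining rather than the authors' pipeline.

```python
from collections import Counter

def ar_signals(reports, min_support=3, min_lift=1.2):
    """reports: list of (drug, ade) pairs, one per spontaneous report.
    Returns pairs whose support (co-occurrence count) and lift meet the thresholds."""
    n = len(reports)
    pair_count = Counter(reports)
    drug_count = Counter(d for d, _ in reports)
    ade_count = Counter(a for _, a in reports)
    signals = []
    for (drug, ade), c in pair_count.items():
        lift = (c / n) / ((drug_count[drug] / n) * (ade_count[ade] / n))
        if c >= min_support and lift >= min_lift:
            signals.append((drug, ade, c, lift))
    return signals
```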
Vlamis, Aristidis; Katikou, Panagiota; Rodriguez, Ines; Rey, Verónica; Alfonso, Amparo; Papazachariou, Angelos; Zacharaki, Thetis; Botana, Ana M.; Botana, Luis M.
2015-01-01
During official shellfish control for the presence of marine biotoxins in Greece in year 2012, a series of unexplained positive mouse bioassays (MBA) for lipophilic toxins with nervous symptomatology prior to mice death was observed in mussels from Vistonikos Bay–Lagos, Rodopi. This atypical toxicity coincided with (a) absence or low levels of regulated and some non-regulated toxins in mussels and (b) the simultaneous presence of the potentially toxic microalgal species Prorocentrum minimum at levels up to 1.89 × 10³ cells/L in the area's seawater. Further analyses by different MBA protocols indicated that the unknown toxin was hydrophilic, whereas UPLC-MS/MS analyses revealed the presence of tetrodotoxins (TTXs) at levels up to 222.9 μg/kg. Reviewing of official control data from previous years (2006-2012) identified a number of sample cases with atypical positive to asymptomatic negative MBAs for lipophilic toxins in different Greek production areas, coinciding with periods of P. minimum blooms. UPLC-MS/MS analysis of retained sub-samples from these cases revealed that TTXs had already been present in Greek shellfish since 2006, in concentrations ranging between 61.0 and 194.7 μg/kg. To our knowledge, this is the earliest reported detection of TTXs in European bivalve shellfish, while it is also the first work to indicate a possible link between presence of the toxic dinoflagellate P. minimum in seawater and that of TTXs in bivalves. Confirmed presence of TTX, a very heat-stable toxin, in filter-feeding mollusks of the Mediterranean Sea, even at levels lower than those inducing symptomatology in humans, indicates that this emerging risk should be seriously taken into account by the EU to protect the health of shellfish consumers. PMID:26008234
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that previous study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracy (AUC = 0.88) was reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
NASA Technical Reports Server (NTRS)
Bozoki, Zoltan; Mohacsi, Arpad; Szabo, Gabor; Bor, Zsolt; Erdelyi, Miklos; Chen, Weidong; Tittel, Frank K.
2002-01-01
A photoacoustic spectroscopic (PAS) and a direct optical absorption spectroscopic (OAS) gas sensor, both using continuous-wave room-temperature diode lasers operating at 1531.8 nm, were compared on the basis of ammonia detection. Excellent linear correlation between the detector signals of the two systems was found. Although the physical properties and the mode of operation of both sensors were significantly different, their performances were found to be remarkably similar, with a sub-ppm level minimum detectable concentration of ammonia and a fast response time in the range of a few minutes.
Wang, Shiying; Herbst, Elizabeth B.; Mauldin, F. William; Diakova, Galina B.; Klibanov, Alexander L.; Hossack, John A.
2016-01-01
Objectives: The objective of this study is to evaluate the minimum microbubble dose for ultrasound molecular imaging to achieve statistically significant detection of angiogenesis in a mouse model. Materials and Methods: The pre-burst minus post-burst method was implemented on a Verasonics ultrasound research scanner using a multi-frame compounding pulse inversion imaging sequence. Biotinylated lipid (distearoyl phosphatidylcholine, DSPC-based) microbubbles conjugated with anti-vascular endothelial growth factor receptor 2 (VEGFR2) antibody (MBVEGFR2) or isotype control antibody (MBControl) were injected into mice carrying adenocarcinoma xenografts. Different injection doses ranging from 5 × 10⁴ to 1 × 10⁷ microbubbles per mouse were evaluated to determine the minimum diagnostically effective dose. Results: The proposed imaging sequence was able to achieve statistically significant detection (p < 0.05, n = 5) of VEGFR2 in tumors with a minimum MBVEGFR2 injection dose of only 5 × 10⁴ microbubbles per mouse (DSPC at 0.053 ng/g mouse body mass). Non-specific adhesion of MBControl at the same injection dose was negligible. Additionally, the targeted contrast ultrasound signal of MBVEGFR2 decreased with lower microbubble doses, while non-specific adhesion of MBControl increased with higher microbubble doses. Conclusions: 5 × 10⁴ microbubbles per animal is now the lowest injection dose on record for ultrasound molecular imaging to achieve statistically significant detection of molecular targets in vivo. Findings in this study provide further guidance for future development of clinically translatable ultrasound molecular imaging applications using a lower dose of microbubbles. PMID:27654582
Measuring electrically charged particle fluxes in space using a fiber optic loop sensor
NASA Technical Reports Server (NTRS)
1992-01-01
The purpose of this program was to demonstrate the potential of a fiber optic loop sensor for the measurement of electrically charged particle fluxes in space. The key elements of the sensor are a multiple-turn loop of low-birefringence, single-mode fiber, a laser diode light source, and a low-noise optical receiver. The optical receiver is designed to be shot-noise limited, this being the limiting sensitivity factor for the sensor. The sensing element is the fiber optic loop. Under a magnetic field from an electric current flowing along the axis of the loop, there is a non-vanishing line integral along the fiber optic loop. This causes a net birefringence producing two states of polarization whose phase difference is correlated to the magnetic field strength, and thus to the current, in the optical receiver's electronic processing. The objective of this program was to develop a prototype laser-diode-powered fiber optic sensor. The performance specification of a minimum detectable current density of 1 μA/(m²·√Hz) should be at the shot-noise limit of the detection electronics. OPTRA successfully built and tested a 3.2 m diameter loop with 137 turns of low-birefringence optical fiber and achieved a minimum detectable current density of 5.4 × 10⁻⁵ A/√Hz. If laboratory space considerations were not an issue, with the length of optical fiber available to us, we would have achieved a minimum detectable current density of 4 × 10⁻⁷ A/√Hz.
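The sensing relation behind such a loop sensor is the Faraday effect. As a hedged aside (the abstract does not quote the formula, and the symbols below are ours), the textbook line-integral expression for a fiber loop of N turns enclosing a net axial current is:

```latex
\theta_F = V \oint \mathbf{B}\cdot d\boldsymbol{\ell} = V \mu_0 N I_{\mathrm{enc}},
\qquad I_{\mathrm{enc}} \approx J \,\pi r^2
```

where V is the Verdet constant of the fiber, N the number of turns, J the charged-particle current density along the loop axis, and r the loop radius; the polarization phase difference read out by the receiver scales with θ_F.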
Zhou, Chennan; Zhang, Xueyin; Huang, Xinxin; Guo, Xishan; Cai, Qiang; Zhu, Songming
2014-01-01
A colloidal gold immunochromatographic assay (GICA) was developed for rapid detection of chloramphenicol (CAP) residues in aquatic products. A nitrocellulose (NC) membrane was used as the carrier, and the polyclonal CAP antibody was used as the marker protein. The average diameter of as-prepared colloidal gold nanoparticles (AuNPs) was about 20 nm. The optimal pH value of colloidal gold solutions and the amount of the antibody of CAP were 8.0 and 7.2 μg/mL, respectively. The CAP antibody was immobilized onto the conjugate pad after purification. The CAP conjugate and goat anti-rabbit IgG (secondary antibody) were coated onto the NC membrane. Next, the non-specific sites were blocked with 1% bovine serum albumin. The minimum detectable concentration of CAP in standard solution was 0.5 ng/mL, with good reproducibility. For real samples from crucian carps injected with a single dose of CAP in the dorsal muscles, the minimum detectable concentration of CAP residues was 0.5 μg/kg. The chromatographic analysis time was less than 10 min, and the strip had a long storage lifetime of more than 90 days at different temperatures. The strips provide a means for rapid detection of CAP residues in aquatic products.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
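As a hedged illustration of this recipe (the paper treats the general case; the sketch below assumes simple Poisson counting with a known background expectation, and the function names, background value, and α/β choices are illustrative):

```python
# Upper limit from Type I/Type II errors for Poisson counts (illustrative sketch).
from scipy.stats import poisson

def detection_threshold(b, alpha=0.05):
    """Smallest count n* whose chance probability under background b is <= alpha (Type I error)."""
    n = 0
    while poisson.sf(n - 1, b) > alpha:    # poisson.sf(n - 1, b) = P(N >= n | b)
        n += 1
    return n

def upper_limit(b, alpha=0.05, beta=0.5, s_max=100.0, step=0.01):
    """Smallest source intensity detected with power >= 1 - beta at the alpha threshold."""
    n_star = detection_threshold(b, alpha)
    s = 0.0
    while s < s_max:
        power = poisson.sf(n_star - 1, b + s)    # P(N >= n* | b + s)
        if power >= 1.0 - beta:
            return s
        s += step
    return None

print(upper_limit(b=3.0))    # upper limit for a background expectation of 3 counts
```

The threshold fixes the Type I error; the upper limit is then the weakest source that would still be detected with the chosen power, exactly as described above.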
Examination of soil effect upon GPR detectability of landmine with different orientations
NASA Astrophysics Data System (ADS)
Ebrahim, Shereen M.; Medhat, N. I.; Mansour, Khamis K.; Gaber, A.
2018-06-01
Landmines represent a serious environmental problem for several countries, as they cause severe injuries and many victims. In this paper, the response of GPR to different parameters of landmine targets is shown, and the data are correlated with an observed field experiment made in 2012 at the Miami Crandon Park test site. The ability of GPR to detect non-metallic mines with different orientations was examined, and the effect of soil on the GPR signal was studied, taking into consideration the soil parameters at different locations in Egypt, such as Sinai and El Alamein. The simulation results showed that the PMN-2 landmine was detected at 5 cm and 15 cm depths, even at the minimum-radar-cross-section vertical orientation. The B-scan (2D GPR profile) of the PMN-2 target at 15 cm depth showed high reflectivity for wadi deposits, owing to the large contrast between the PMN-2 landmine material and the sand-dune soil.
Prinz, P; Ronacher, B
2002-08-01
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
UNUSUAL TRENDS IN SOLAR P-MODE FREQUENCIES DURING THE CURRENT EXTENDED MINIMUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathy, S. C.; Jain, K.; Hill, F.
2010-03-10
We investigate the behavior of the intermediate-degree mode frequencies of the Sun during the current extended minimum phase to explore the time-varying conditions in the solar interior. Using contemporaneous helioseismic data from the Global Oscillation Network Group (GONG) and the Michelson Doppler Imager (MDI), we find that the changes in resonant mode frequencies during the activity minimum period are significantly greater than the changes in solar activity as measured by different proxies. We detect a seismic minimum in MDI p-mode frequency shifts during 2008 July-August but no such signature is seen in mean shifts computed from GONG frequencies. We also analyze the frequencies of individual oscillation modes from GONG data as a function of latitude and observe a signature of the onset of the solar cycle 24 in early 2009. Thus, the intermediate-degree modes do not confirm the onset of the cycle 24 during late 2007 as reported from the analysis of the low-degree Global Oscillations at Low Frequency frequencies. Further, both the GONG and MDI frequencies show a surprising anti-correlation between frequencies and activity proxies during the current minimum, in contrast to the behavior during the minimum between cycles 22 and 23.
Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance
NASA Technical Reports Server (NTRS)
Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.
2016-01-01
Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, the minimum detectable molecular contamination levels for a chosen inspection and analysis method, and the resulting effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades among different inspection and analysis methods and determine whether a planned method is adequate to meet system requirements and manage contamination risk.
Natural and anthropogenic radioactivity in the environment of Kopaonik mountain, Serbia.
Mitrović, Branislava; Ajtić, Jelena; Lazić, Marko; Andrić, Velibor; Krstić, Nikola; Vranješ, Borjana; Vićentijević, Mihajlo
2016-08-01
To evaluate the state of the environment in Kopaonik, a mountain in Serbia, the activity concentrations of (40)K, (226)Ra, (232)Th and (137)Cs in five different types of environmental samples are determined by gamma ray spectrometry, and radiological hazard due to terrestrial radionuclides is calculated. The mean activity concentrations of natural radionuclides in the soil are higher than the global average. However, with the exception of two sampling locations, the external radiation hazard index is below one, implying an insignificant radiation hazard. Apart from (40)K, the content of the natural radionuclides is predominantly below minimum detectable activities in grass and cow milk, but not in mosses. Although (137)Cs is present in the soil, grass, mosses and herbal plants, its specific activity in cow milk is below the minimum detectable activity. Amongst the investigated herbal plants, Vaccinium myrtillus L. shows accumulating properties, as a high content of (137)Cs is detected therein. Therefore, moderation is advised in consuming Vaccinium myrtillus L. tea.
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died from thymic leukemia. This failure is a result of the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
NEW EVIDENCE FOR CHARGE-SIGN-DEPENDENT MODULATION DURING THE SOLAR MINIMUM OF 2006 TO 2009
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Felice, V.; Munini, R.; Vos, E. E.
The PAMELA space experiment, in orbit since 2006, has measured cosmic rays (CRs) through the most recent period of minimum solar activity with the magnetic field polarity as A < 0. During this entire time, galactic electrons and protons have been detected down to 70 MV and 400 MV, respectively, and their differential variation in intensity with time has been monitored with unprecedented accuracy. These observations are used to show how differently electrons and protons responded to the quiet modulation conditions that prevailed from 2006 to 2009. It is well known that particle drifts, as one of four major mechanisms for the solar modulation of CRs, cause charge-sign-dependent solar modulation. Periods of minimum solar activity provide optimal conditions in which to study these drift effects. The observed behavior is compared to the solutions of a three-dimensional model for CRs in the heliosphere, including drifts. The numerical results confirm that the difference in the evolution of electron and proton spectra during the last prolonged solar minimum is attributed to a large extent to particle drifts. We therefore present new evidence of charge-sign-dependent solar modulation, with a perspective on its peculiarities for the observed period from 2006 to 2009.
Pradhan, Somarpita; Chaudhuri, Partha Roy
2015-07-10
We experimentally demonstrate a single-mode optical-fiber beam-deflection configuration for weak magnetic-field detection using an optimized (low coercive-field) composition of cobalt-doped nickel ferrite nanoparticles. Devising a fiber double-slit type experiment, we measure the surrounding magnetic field by precisely measuring the interference fringes, yielding a minimum detectable field of ∼100 mT, and we obtain magnetization data of the sample that fairly predict SQUID measurements. To improve sensitivity, we incorporate an etched single-mode fiber in the double-slit arrangement and record a minimum detectable field of ∼30 mT. To improve this further, we redefine the experiment as modulating fiber-to-fiber light transmission and demonstrate a minimum detectable field of 2.0 mT. The device will be uniquely suited for electrical or otherwise hazardous environments.
On the relationship of minimum detectable contrast to dose and lesion size in abdominal CT
NASA Astrophysics Data System (ADS)
Zhou, Yifang; Scott, Alexander, II; Allahverdian, Janet; Lee, Christina; Kightlinger, Blake; Azizyan, Avetis; Miller, Joseph
2015-10-01
CT dose optimization is typically guided by pixel noise or contrast-to-noise ratio, which does not delineate low contrast details adequately. We utilized the statistically defined low contrast detectability to study its relationship to dose and lesion size in abdominal CT. A realistically shaped, medium-sized abdomen phantom was customized to contain a cylindrical void of 4 cm diameter. The void was filled with a low contrast (1% and 2%) insert containing six groups of cylindrical targets ranging from 1.2 mm to 7 mm in size. Helical CT scans were performed using a Siemens 64-slice mCT and a GE Discovery 750 HD at various doses. After the subtractions between adjacent slices, the uniform sections of the filtered backprojection reconstructed images were partitioned into matrices of square elements matching the sizes of the targets. It was verified that the mean values from all the elements in each matrix follow a Gaussian distribution. The minimum detectable contrast (MDC), quantified by the mean signal-to-background difference equal to the distribution's standard deviation multiplied by 3.29, corresponding to a 95% confidence level, was found to be related to the phantom-specific dose and the element size by a power law (R^2 > 0.990). Independent readings of the 5 mm and 7 mm targets were compared with the ratios of the measured contrast to the MDC. The results showed that 93% of the cases were detectable when the measured contrast exceeded the MDC. The correlation of the MDC to the pixel noise and target size was also identified, and the relationship was found to be the same for the scanners in the study. To quantify the impact of iterative reconstructions on the low contrast detectability, the noise structure was studied in a similar manner at different doses and with different ASIR blending fractions. The relationship of the dose to the blending fraction and low contrast detectability is presented.
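A minimal sketch of this statistically defined MDC, assuming two repeated slices of a uniform phantom region and square elements matched to the target size (the function and array names and the example element size are ours, not the authors'):

```python
import numpy as np

def minimum_detectable_contrast(slice_a, slice_b, element_px):
    """MDC at ~95% confidence from the spread of mean values of square elements
    in the difference of two adjacent (repeated) uniform slices."""
    diff = slice_a.astype(float) - slice_b.astype(float)   # subtraction cancels fixed structure
    h, w = diff.shape
    means = []
    for r in range(0, h - element_px + 1, element_px):
        for c in range(0, w - element_px + 1, element_px):
            means.append(diff[r:r + element_px, c:c + element_px].mean())
    sigma = np.std(means, ddof=1)
    return 3.29 * sigma   # 3.29 x standard deviation, as in the abstract

# e.g. mdc = minimum_detectable_contrast(img1, img2, element_px=8)  # hypothetical element size
```

Fitting these MDC values against dose and element size would then recover the power-law relation reported above.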
Milner, Clare E; Brindle, Richard A
2016-01-01
There has been increased interest recently in measuring kinematics within the foot during gait. While several multisegment foot models have appeared in the literature, the Oxford foot model has been used frequently for both walking and running. Several studies have reported the reliability of the Oxford foot model, but most studies to date have reported reliability for barefoot walking. The purpose of this study was to determine between-day (intra-rater) and within-session (inter-trial) reliability of the modified Oxford foot model during shod walking and running and to calculate the minimum detectable difference for common variables of interest. Healthy adult male runners participated. Participants ran and walked in the gait laboratory for five trials of each. Three-dimensional gait analysis was conducted and foot and ankle joint angle time series data were calculated. Participants returned for a second gait analysis at least 5 days later. Intraclass correlation coefficients and minimum detectable differences were determined for walking and for running, to indicate both within-session and between-day reliability. Overall, relative variables were more reliable than absolute variables, and within-session reliability was greater than between-day reliability. Between-day intraclass correlation coefficients were comparable to those reported previously for adults walking barefoot. The use of the Oxford foot model is extended here by incorporating a shoe while maintaining marker placement directly on the skin for each segment. These reliability data for walking and running will aid in the determination of meaningful differences in studies which use this model during shod gait.
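For context, a standard way to turn such reliability statistics into a minimum detectable difference (the abstract does not spell out the formula, so this is a conventional-formula sketch rather than necessarily the authors' exact computation) is:

```latex
\mathrm{SEM} = \mathrm{SD}\sqrt{1-\mathrm{ICC}}, \qquad
\mathrm{MDD}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM}
```

where SD is the between-subject standard deviation of the variable, ICC the intraclass correlation coefficient, and the √2 accounts for the comparison of two measurements.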
Antidepressant treatment of depression in rural nursing home residents.
Kerber, Cindy Sullivan; Dyck, Mary J; Culp, Kennith R; Buckwalter, Kathleen
2008-09-01
Under-diagnosis and under-treatment of depression are major problems in nursing home residents. The purpose of this study was to determine antidepressant use among nursing home residents who were diagnosed with depression using three different methods: (1) the Geriatric Depression Scale, (2) Minimum Data Set, and (3) primary care provider assessments. As one would expect, the odds of being treated with an antidepressant were about eight times higher for those diagnosed as depressed by the primary care provider compared to the Geriatric Depression Scale or the Minimum Data Set. Men were less likely to be diagnosed and treated with antidepressants by their primary care provider than women. Depression detected by nurses through the Minimum Data Set was treated at a lower rate with antidepressants, which generates issues related to interprofessional communication, nursing staff communication, and the need for geropsychiatric role models in nursing homes.
[Storage of plant protection products in farms: minimum safety requirements].
Dutto, Moreno; Alfonzo, Santo; Rubbiani, Maristella
2012-01-01
Failure to comply with requirements for proper storage and use of pesticides on farms can be extremely hazardous, and the risk of accidents involving farm workers, other persons and even animals is high. There are still wide differences in the interpretation of the concept of "securing or making safe" by workers in this sector. One of the critical points detected, particularly in the fruit sector, is the establishment of an adequate storage site for plant protection products. The definition of "safe storage of pesticides" is still unclear despite the recent enactment of Legislative Decree 81/2008 regulating health and work safety in Italy. In addition, there are no national guidelines setting clear minimum criteria for the storage of plant protection products on farms. The authors, on the basis of their professional experience and through analysis of recent legislation, establish certain minimum safety standards for the storage of pesticides on farms.
NASA Astrophysics Data System (ADS)
Kulp, Thomas J.; Garvis, Darrel G.; Kennedy, Randall B.; McRae, Thomas G.
1991-08-01
The application of backscatter absorption gas imaging (BAGI) to the detection of gaseous chemical species associated with the production of illegal drugs is considered. BAGI is a gas visualization technique that allows the imaging of over 70 organic vapors at minimum concentrations of a few to several hundred ppm-m. Present BAGI capabilities at Lawrence Livermore National Laboratory and Laser Imaging Systems are discussed. Eighteen different species of interest in drug-law enforcement are identified as being detectable by BAGI. The chemical remote sensing needs of law enforcement officials are described, and the use of BAGI in meeting some of these needs is outlined.
Experimental evaluation of a system for human life detection under debris
NASA Astrophysics Data System (ADS)
Joju, Reshma; Konica, Pimplapure Ramya T.; Alex, Zachariah C.
2017-11-01
It is difficult for human beings to be found under debris or behind walls, for example in military applications. Several rescue techniques, such as robotic systems, optical devices, and acoustic devices, have therefore been used, but these systems fail if the victim is unconscious. We conducted an experimental analysis of whether microwaves can detect the heartbeat and breathing signals of human beings trapped under collapsed debris. For our analysis we used a radar based on the Doppler shift effect. We calculated the minimum speed that the radar could detect. We checked the frequency variation by placing the radar at a fixed position and setting the object in motion at different distances, and by using objects of different materials as debris behind which the motion was made. The graphs of the different analyses were plotted.
The weak-line T Tauri star V410 Tau. I. A multi-wavelength study of variability
NASA Astrophysics Data System (ADS)
Stelzer, B.; Fernández, M.; Costa, V. M.; Gameiro, J. F.; Grankin, K.; Henden, A.; Guenther, E.; Mohanty, S.; Flaccomio, E.; Burwitz, V.; Jayawardhana, R.; Predehl, P.; Durisen, R. H.
2003-12-01
We present the results of an intensive coordinated monitoring campaign in the optical and X-ray wavelength ranges of the low-mass, pre-main sequence star V410 Tau carried out in November 2001. The aim of this project was to study the relation between various indicators for magnetic activity that probe different emitting regions and would allow us to obtain clues on the interplay of the different atmospheric layers: optical photometric star spot (rotation) cycle, chromospheric Hα emission, and coronal X-rays. Our optical photometric monitoring has allowed us to measure the time of the minimum of the lightcurve with high precision. Joining the result with previous data we provide a new estimate for the dominant periodicity of V410 Tau (1.871970 +/- 0.000010 d). This updated value removes systematic offsets of the time of minimum observed in data taken over the last decade. The recurrence of the minimum in the optical lightcurve over such a long timescale emphasizes the extraordinary stability of the largest spot. This is confirmed by radial velocity measurements: data from 1993 and 2001 fit almost exactly onto each other when folded with the new period. The combination of the new data from November 2001 with published measurements taken during the last decade allows us to examine long-term changes in the mean light level of the photometry of V410 Tau. A variation on the timescale of 5.4 yr is suggested. Assuming that this behavior is truly cyclic V410 Tau is the first pre-main sequence star on which an activity cycle is detected. Two X-ray pointings were carried out with the Chandra satellite simultaneously with the optical observations, and centered near the maximum and minimum levels of the optical lightcurve. A relation of their different count levels to the rotation period of the dominating spot is not confirmed by a third Chandra observation carried out some months later, during another minimum of the 1.87 d cycle. Similarly we find no indications for a correlation of the Hα emission with the spots' rotational phase. The lack of detected rotational modulation in two important activity diagnostics seems to argue against a direct association of chromospheric and coronal emission with the spot distribution.
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
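As a rough Monte Carlo illustration of the first step (the sketch only reproduces the distribution of the height ratio of neighbouring peaks under a log-normal height model; the mapping from that ratio to the minimum resolution is specific to the paper and is not reproduced here, and the distribution parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # illustrative log-normal parameters

# ratio of each peak height to its neighbour, folded so that the ratio is <= 1
pairs = heights.reshape(-1, 2)
ratio = np.minimum(pairs[:, 0], pairs[:, 1]) / np.maximum(pairs[:, 0], pairs[:, 1])

hist, edges = np.histogram(ratio, bins=50, range=(0.0, 1.0), density=True)
print(hist[:5])   # empirical density of the height-ratio distribution near zero
```

Truncating the sampled heights at a noise floor or an upper limit, as discussed above, simply amounts to filtering `heights` before forming the pairs.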
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
How Different Marker Sets Affect Joint Angles in Inverse Kinematics Framework.
Mantovani, Giulia; Lamontagne, Mario
2017-04-01
The choice of marker set is a source of variability in motion analysis. Studies exist which assess the performance of marker sets when direct kinematics is used, but these results cannot be extrapolated to the inverse kinematic framework. Therefore, the purpose of this study was to examine the sensitivity of kinematic outcomes to inter-marker-set variability in an inverse kinematic framework. The compared marker sets were plug-in-gait, the University of Ottawa motion analysis model and a three-marker-cluster marker set. Walking trials of 12 participants were processed in opensim. The coefficient of multiple correlation was very good for sagittal (>0.99) and frontal (>0.92) plane angles, but worsened for the transverse plane (0.72). Absolute reliability indices are also provided for comparison among studies: minimum detectable change values ranged from 3 deg for the hip sagittal range of motion to 16.6 deg for the hip transverse range of motion. Ranges of motion of hip and knee abduction/adduction angles and hip and ankle rotations were significantly different among the three marker configurations (P < 0.001), with plug-in-gait producing larger ranges of motion. Although the same model was used for all the marker sets, the resulting minimum detectable changes were high and clinically relevant, which warrants caution when comparing studies that use different marker configurations, especially if they differ in the joint-defining markers.
Urinary orotic acid-to-creatinine ratios in cats with hepatic lipidosis.
VanSteenhouse, J L; Dimski, D S; Swenson, D H; Taboada, J
1999-06-01
To determine urinary orotic acid (OA) concentration and evaluate the urinary OA-to-creatinine ratio (OACR) in cats with hepatic lipidosis (HL). 20 cats with HL and 20 clinically normal cats. Hepatic lipidosis was diagnosed on the basis of clinical signs, results of serum biochemical analyses, exclusion of other concurrent illness, and cytologic or histologic evaluation of liver biopsy specimens. Urine samples were collected from each cat and frozen at -20 C until assayed. Urine creatinine concentrations were determined, using an alkaline picrate method followed by spectrophotometric assay. Urine OA concentration was determined, using high-performance liquid chromatography. Minimum amount of detectable OA in feline urine was 1 microg/ml. Because of small interfering peaks near the base of the OA peak, the minimum quantifiable concentration of OA was determined to be 5 microg/ml. Urinary OACR was compared in both groups of cats. Differences in urinary OACR were not detected between clinically normal cats and cats with HL. Peaks were not detected for urinary OA in any of the 20 clinically normal cats. Of the 20 HL cats, 14 did not have detectable peaks for urinary OA. Of the 6 HL cats that had detectable urinary OA peaks, 3 had values of <5 microg/ml. Apparently, OACR does not increase significantly in cats with HL. Urinary OACR is not a useful diagnostic test for HL in cats.
GOES Cloud Detection at the Global Hydrology and Climate Center
NASA Technical Reports Server (NTRS)
Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)
2002-01-01
The bi-spectral threshold (BTH) for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method, with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud top pressure assignment. The method capitalizes on cloud and surface emissivity differences from the GOES 3.9 and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection, and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally run regional forecast model. This paper and supplemental poster will present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud top pressure will be evaluated by comparing to the NESDIS operational CO2 slicing approach.
Si, Xingfeng; Kays, Roland
2014-01-01
Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites or running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of the effect of adding new camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.
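A minimal sketch of the sample-based rarefaction used here, assuming a table of (camera_day, species) detection records; the function name, record layout and resampling scheme are illustrative rather than the authors' exact analysis:

```python
import random

def rarefaction_curve(records, n_resamples=200, seed=0):
    """Mean number of species detected as a function of the number of camera days sampled."""
    rng = random.Random(seed)
    by_day = {}
    for day, species in records:                  # group detections by camera day
        by_day.setdefault(day, set()).add(species)
    days = list(by_day)
    curve = []
    for effort in range(1, len(days) + 1):
        richness = []
        for _ in range(n_resamples):
            sample = rng.sample(days, effort)
            richness.append(len(set().union(*(by_day[d] for d in sample))))
        curve.append(sum(richness) / n_resamples)
    return curve   # the effort at which the curve flattens indicates sufficient sampling

# records = [(1, "muntjac"), (1, "silver pheasant"), (2, "muntjac"), ...]  # hypothetical data
```

Reading off where the curve approaches the known species richness gives the kind of minimum-effort estimate (e.g. the 931 camera days quoted above) reported in the study.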
ERIC Educational Resources Information Center
Price, Cristofer; Unlu, Fatih
2014-01-01
The Comparative Short Interrupted Time Series (C-SITS) design is a frequently employed quasi-experimental method, in which the pre- and post-intervention changes observed in the outcome levels of a treatment group are compared with those of a comparison group, and the difference between the former and the latter is attributed to the treatment. The…
Monitoring Boreal Forest Owls in Ontario using tape playback surveys with volunteers
Charles M. Francis; Michael S. W. Bradstreet
1997-01-01
Long Point Bird Observatory ran pilot surveys in 1995 and 1996 to monitor boreal forest owls in Ontario using roadside surveys with tape playback of calls. A minimum of 791 owls was detected on 84 routes in 1995, and 392 owls on 88 routes in 1996; nine different species were detected. Playback improved the response rate for Barred (Strix varia), Boreal (...
Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; ...
2016-03-10
The metastable ring structure of the ozone 1 ¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1 ¹A₁ state and the 2 ¹A₁ state were calculated at the 1 ¹A₁ open minimum, the 1 ¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2 ¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1 ¹A₁ state, the present calculations yield the estimates of (ring minimum − open minimum) ∼45–50 mh and (transition state − open minimum) ∼85–90 mh. For the (2 ¹A₁ − 1 ¹A₁) excitation energy, the estimate of ∼130–170 mh is found at the open minimum and 270–310 mh at the ring minimum. At the transition state, the difference (2 ¹A₁ − 1 ¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1 ¹A₁ surface and that of the minimum on the 2 ¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.
Ozkan, Semiha; Kaynak, Fatma; Kalkanci, Ayse; Abbasoglu, Ufuk; Kustimur, Semra
2005-05-01
Slime and proteinase activity of 54 strains, consisting of 19 Candida parapsilosis and 35 C. albicans strains isolated from blood samples, were investigated in this study. Ketoconazole, amphotericin B, and fluconazole susceptibility of the Candida species was compared with slime production and proteinase activity of these species. For both Candida species, no correlation was detected between slime activity and the minimum inhibitory concentration (MIC) values of the three antifungal agents. For both Candida species, no correlation was detected between proteinase activity and the MIC values of amphotericin B and fluconazole; however, a statistically significant difference was determined between proteinase activity and the MIC values of ketoconazole (p = 0.007). Slime production was determined using the modified Christensen macrotube method, and proteinase activity was measured by the method of Staib. Antifungal susceptibility was determined following the guidelines of the National Committee for Clinical Laboratory Standards (NCCLS M27-A).
Detection of cow milk adulteration in yak milk by ELISA.
Ren, Q R; Zhang, H; Guo, H Y; Jiang, L; Tian, M; Ren, F Z
2014-10-01
In the current study, a simple, sensitive, and specific ELISA assay using a high-affinity anti-bovine β-casein monoclonal antibody was developed for the rapid detection of cow milk in adulterated yak milk. The developed ELISA was highly specific and could be applied to detect bovine β-casein (10-8,000 μg/mL) and cow milk (1:1,300 to 1:2 dilution) in yak milk. Cross-reactivity was <1% when tested against yak milk. The linear range of adulterant concentration was 1 to 80% (vol/vol) and the minimum detection limit was 1% (vol/vol) cow milk in yak milk. Different treatments, including heating, acidification, and rennet addition, did not interfere with the assay. Moreover, the results were highly reproducible (coefficient of variation <10%) and we detected no significant differences between known and estimated values. Therefore, this assay is appropriate for the routine analysis of yak milk adulterated with cow milk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warburton, William K.; Hennig, Wolfgang G.
A method and apparatus for measuring the concentrations of radioxenon isotopes in a gaseous sample wherein the sample cell is surrounded by N sub-detectors that are sensitive to both electrons and to photons from radioxenon decays. Signal processing electronics are provided that can detect events within the sub-detectors, measure their energies, determine whether they arise from electrons or photons, and detect coincidences between events within the same or different sub-detectors. The energies of detected two or three event coincidences are recorded as points in associated two or three-dimensional histograms. Counts within regions of interest in the histograms are then used to compute estimates of the radioxenon isotope concentrations. The method achieves lower backgrounds and lower minimum detectable concentrations by using smaller detector crystals, eliminating interference between double and triple coincidence decay branches, and segregating double coincidences within the same sub-detector from those occurring between different sub-detectors.
Estimate of the influence of muzzle smoke on function range of infrared system
NASA Astrophysics Data System (ADS)
Luo, Yan-ling; Wang, Jun; Wu, Jiang-hui; Wu, Jun; Gao, Meng; Gao, Fei; Zhao, Yu-jie; Zhang, Lei
2013-09-01
Muzzle smoke produced by weapons firing has an important influence on infrared (IR) systems when detecting targets. Based on the theoretical model of an IR system detecting spot targets and surface targets in the presence of muzzle smoke, the function range for detecting spot targets and surface targets is deduced separately according to the definitions of noise equivalent temperature difference (NETD) and minimum resolvable temperature difference (MRTD). The parameters of muzzle smoke affecting the function range of an IR system are also analyzed. Based on measured data of muzzle smoke for a single shot, the function range of an IR system for detecting typical targets is calculated separately with and without muzzle smoke in the 8-12 micron waveband. For our IR system, the function range is reduced by over 10% for detecting a tank when muzzle smoke is present. The results will provide evidence for evaluating the influence of muzzle smoke on IR systems and will help researchers improve ammunition manufacturing processes.
NASA Astrophysics Data System (ADS)
Skopal, A.; Pribulla, T.; Vaňko, M.; Velič, Z.; Semkov, E.; Wolf, M.; Jones, A.
2004-02-01
We present new photometric observations of EG And, Z And, BF Cyg, CH Cyg, CI Cyg, V1329 Cyg, TX CVn, AG Dra, RW Hya, AG Peg, AX Per, IV Vir and the peculiar M giant V934 Her, which were made in the standard Johnson UBV(R) system. QW Sge was measured in the Kron-Cousin B, V, RC, IC system and for AR Pav we present its new visual estimates. The current issue gathers observations of these objects to December 2003. The main results can be summarized as follows: EG And: The primary minimum in the U light curve (LC) occurred at the end of 2002. A 0.2 -- 0.3 mag brightening in U was detected in the autumn of 2003. Z And: At around August 2002 we detected for the first time a minimum, which is due to eclipse of the active object by the red giant. Measurements from 2003.3 are close to those of a quiescent phase. BF Cyg: In February 2003 a short-term flare developed in the LC. A difference in the depth of recent minima was detected. CH Cyg: This star was in a quiescent phase at a rather bright state. A shallow minimum occurred at ˜ JD 2 452 730, close to the position of the inferior conjunction of the giant in the inner binary of the triple-star model of CH Cyg. CI Cyg: Our observations cover the descending branch of a broad minimum. TX CVn: At/around the beginning of 2003 the star entered a bright stage containing a minimum at ˜ JD 2 452 660. AG Dra: New observations revealed two eruptions, which peaked in October 2002 and 2003 at ˜ 9.3 in U. AR Pav: Our new visual estimates showed a transient disappearance of a wave-like modulation in the star's brightness between the minima at epochs E = 66 and E = 68 and its reappearance. AG Peg: Our measurements from the end of 2001 showed rather complex profile of the LC. RW Hya: Observations follow behaviour of the wave-like variability of quiet symbiotics. AX Per: In May 2003 a 0.5 mag flare was detected following a rapid decrease of the light to a minimum. QW Sge: CCD observations in B, V, RC, IC bands cover a period from 1994.5 to 2003.5. An increase in the star's brightness by about 1 mag was observed in all passbands in 1997. Less pronounced brightening was detected in 1999/2000. V934 Her: Our observations did not show any larger variation in the optical as a reaction to its X-ray activity.
Edge grouping combining boundary and region information.
Stahl, Joachim S; Wang, Song
2007-10-01
This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.
A deep oxic ecosystem in the subseafloor South Pacific Gyre
NASA Astrophysics Data System (ADS)
D'Hondt, S. L.; Inagaki, F.; Alvarez Zarikian, C. A.; Integrated Ocean Drilling Program Expedition 329 Shipboard Scientific Party
2011-12-01
Scientific ocean drilling has demonstrated the occurrence of rich microbial communities, abundant active cells and diverse anaerobic activities in anoxic subseafloor sediment. Buried organic matter from the surface photosynthetic world sustains anaerobic heterotrophs in anoxic sediment as deeply buried as 1.6 km below the seafloor. However, these studies have been mostly restricted to the organic-rich sediment of continental margins and biologically productive regions. IODP Expedition 329 discovered that subseafloor habitat and life are fundamentally different in the vast expanse of organic-poor sediment that underlies Earth's largest oceanic province, the South Pacific Gyre (SPG). Dissolved O2 and dissolved major nutrients (C, N, P) are present throughout the entire sediment sequence and the upper basaltic basement of the SPG. The drilled sediment is up to 75 m thick. Although heterotrophic O2 reduction (aerobic respiration) persists for millions of years in SPG sediment (which accumulates very slowly), it falls below minimum detection just a few meters to tens of meters beneath the SPG seafloor. Cell concentrations approach minimum detection at similar depths, but are intermittently detectable throughout the entire sediment sequence. In situ radiolysis of water may be a significant source of energy for the microbes that inhabit the deepest (oldest) sediment.
Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments
2012-06-14
[Extraction residue from the thesis appendix: step-by-step virtual machine configuration instructions, including allocating at least the minimum memory required for the guest OS (2048 MB for 64-bit Windows 7) and selecting the VDI file type with dynamically allocated storage in the virtual disk creation wizard.]
Liu, Zhaojun; Zhou, Jing; Gu, Liankun; Deng, Dajun
2016-08-30
Methylation changes of CpG islands can be determined using PCR-based assays. However, the exact impact of the amount of input templates (TAIT) on DNA methylation analysis has not been previously recognized. Using the COL2A1 gene as an input reference, the TAIT difference between human tissues with methylation-positive and -negative detection was calculated for two representative genes, GFRA1 and P16. Results revealed that TAIT in GFRA1 methylation-positive frozen samples (n = 332) was significantly higher than in the methylation-negative ones (n = 44) (P < 0.001). A similar difference was found in the P16 methylation analysis. The TAIT-related effect was also observed in methylation-specific PCR (MSP) and denaturing high performance liquid chromatography (DHPLC) analysis. Further study showed that the minimum TAIT for a successful MethyLight PCR reaction should be ≥ 9.4 ng (Ct(COL2A1) ≤ 29.3), when the cutoff value of the methylated-GFRA1 proportion for methylation-positive detection was set at 1.6%. After the TAIT of the methylation non-informative frozen samples (n = 94; Ct(COL2A1) > 29.3) was increased above the minimum TAIT, the methylation-positive rate increased from 72.3% to 95.7% for GFRA1 and from 26.6% to 54.3% for P16 (Ps < 0.001). Similar results were observed in the FFPE samples. In conclusion, TAIT critically affects the results of various PCR-based DNA methylation analyses. Characterization of the minimum TAIT for target CpG islands is essential to avoid false-negative results.
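A minimal sketch of the input-amount check implied by these cutoffs, using the Ct(COL2A1) ≤ 29.3 and ≥ 9.4 ng values quoted above; the function and variable names are illustrative:

```python
MIN_TAIT_NG = 9.4          # minimum amount of input template reported above
CT_COL2A1_CUTOFF = 29.3    # COL2A1 reference Ct corresponding to that minimum

def methylation_call(ct_col2a1, methylation_positive):
    """A negative methylation call is only informative when enough template was present."""
    enough_template = ct_col2a1 <= CT_COL2A1_CUTOFF
    if methylation_positive:
        return "methylation-positive"
    return "methylation-negative" if enough_template else "non-informative (insufficient input)"

print(methylation_call(28.5, False))   # informative negative
print(methylation_call(30.2, False))   # should be re-tested with more input DNA
```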
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
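Of the three algorithms compared, the minimum-distance classifier is the simplest to illustrate. The sketch below assumes per-class mean spectra estimated from training sites; the band values and class names are invented for illustration and are not the study's data:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel (n_pixels, n_bands) to the class whose mean spectrum is nearest (Euclidean)."""
    names = list(class_means)
    means = np.stack([class_means[name] for name in names])        # (n_classes, n_bands)
    d2 = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(-1)   # squared distances
    return [names[i] for i in d2.argmin(axis=1)]

class_means = {"water": np.array([30.0, 20.0, 10.0]),
               "vegetation": np.array([40.0, 80.0, 35.0]),
               "soil": np.array([90.0, 85.0, 70.0])}
pixels = np.array([[32.0, 22.0, 12.0], [88.0, 80.0, 72.0]])
print(minimum_distance_classify(pixels, class_means))              # ['water', 'soil']
```

Because the decision depends only on class means rather than full class statistics, this classifier tends to be less sensitive to pixel-level detail removed by compression, consistent with maximum-likelihood showing the poorest results above.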
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
Physiological and biochemical responses of Prorocentrum minimum to high light stress
NASA Astrophysics Data System (ADS)
Park, So Yun; Choi, Eun Seok; Hwang, Jinik; Kim, Donggiun; Ryu, Tae Kwon; Lee, Taek-Kyun
2009-12-01
Prorocentrum minimum is a common bloom-forming photosynthetic dinoflagellate found along the southern coast of Korea. To investigate the adaptive responses of P. minimum to high light stress, we measured growth rate and the generation of reactive oxygen species (ROS), superoxide dismutase (SOD), catalase (CAT), and malondialdehyde (MDA) in cultures exposed to normal (NL) and high light levels (HL). The results showed that HL (800 μmol m⁻² s⁻¹) inhibited growth of P. minimum, with maximal inhibition after 7-9 days. HL also increased the amount of ROS and MDA, suggesting that HL stress leads to oxidative damage and lipid peroxidation in this species. Under HL, we first detected superoxide on day 4 and H2O2 on day 5. We also detected SOD activity on day 5 and CAT activity on day 6. The level of lipid peroxidation, an indicator of cell death, was high on day 8. Addition of diphenyleneiodonium (DPI), an NAD(P)H oxidase inhibitor, decreased the levels of superoxide generation and lipid peroxidation. Our results indicate that the production of ROS resulting from HL stress in P. minimum also induces antioxidative enzymes that counteract oxidative damage and allow P. minimum to survive.
Assurance of Learning, "Closing the Loop": Utilizing a Pre and Post Test for Principles of Finance
ERIC Educational Resources Information Center
Flanegin, Frank; Letterman, Denise; Racic, Stanko; Schimmel, Kurt
2010-01-01
Since there is no standard national Pre and Post Test for Principles of Finance, akin to the one for Economics, the authors created one by selecting questions from previously administered examinations. The Cronbach's alpha of 0.851, exceeding the minimum of 0.70 for a reliable pen-and-paper test, indicates that our test can detect differences in…
A simple method to measure critical angles for high-sensitivity differential refractometry.
Zilio, S C
2012-01-16
A total internal reflection-based differential refractometer, capable of measuring the real and imaginary parts of the complex refractive index in real time, is presented. The device takes advantage of the phase difference acquired by s- and p-polarized light to generate an easily detectable minimum in the reflected profile. The method allows sensitive measurement of transparent and turbid liquid samples.
Application of Twin Beams in Mach-Zehnder Interferometer
NASA Technical Reports Server (NTRS)
Zhang, J. X.; Xie, C. D.; Peng, K. C.
1996-01-01
Using the twin beams generated from a parametric amplifier to drive the two ports of a Mach-Zehnder interferometer, it is shown that the minimum detectable optical phase shift can be greatly reduced, reaching the Heisenberg limit (1/n), which is far below the shot-noise limit (1/√n), in the large-gain limit. The dependence of the minimum detectable phase shift on parametric gain and on inefficient photodetectors is discussed.
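A minimal numeric illustration of the two scaling laws quoted above (shot-noise limit 1/√n versus Heisenberg limit 1/n); this only compares the limits and is not the paper's parametric-amplifier derivation.

```python
import numpy as np

# Compare the shot-noise-limited and Heisenberg-limited minimum detectable
# phase shifts for a few mean photon numbers n (illustrative values only).
for n in (1e3, 1e6, 1e9):
    snl = 1.0 / np.sqrt(n)   # shot-noise limit
    hl = 1.0 / n             # Heisenberg limit
    print(f"n = {n:.0e}: SNL = {snl:.1e} rad, HL = {hl:.1e} rad, "
          f"gain = {snl / hl:.0e}x")
```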
Methods for the preparation and analysis of solids and suspended solids for methylmercury
DeWild, John F.; Olund, Shane D.; Olson, Mark L.; Tate, Michael T.
2004-01-01
This report presents the methods and method performance data for the determination of methylmercury concentrations in solids and suspended solids. Using the methods outlined here, the U.S. Geological Survey's Wisconsin District Mercury Laboratory can consistently detect methylmercury in solids and suspended solids at environmentally relevant concentrations. Solids can be analyzed wet or freeze dried with a minimum detection limit of 0.08 ng/g (as-processed). Suspended solids must first be isolated from aqueous matrices by filtration. The minimum detection limit for suspended solids is 0.01 ng per filter resulting in a minimum reporting limit ranging from 0.2 ng/L for a 0.05 L filtered volume to 0.01 ng/L for a 1.0 L filtered volume. Maximum concentrations for both matrices can be extended to cover nearly any amount of methylmercury by limiting sample size.
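The per-filter detection limit and the reporting limits quoted above follow from a simple volume scaling; a short sketch (the function name is ours, not the report's):

```python
def reporting_limit_ng_per_l(detection_limit_ng_per_filter=0.01, filtered_volume_l=1.0):
    """Convert the per-filter methylmercury detection limit (ng/filter) to a
    water-column reporting limit (ng/L) for a given filtered volume (L)."""
    return detection_limit_ng_per_filter / filtered_volume_l

# Reproduces the range quoted in the report: 0.05 L -> 0.2 ng/L, 1.0 L -> 0.01 ng/L.
for volume in (0.05, 1.0):
    print(f"{volume:5.2f} L filtered -> {reporting_limit_ng_per_l(filtered_volume_l=volume):.2f} ng/L")
```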
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating daily temperature as the average of the minimum and maximum daily readings leads to an overestimation of the daily values of 10% or more when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).
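A toy illustration of the bias mechanism described above, using a synthetic asymmetric diurnal cycle; the station data and the 10%-plus figure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
# Synthetic hourly temperatures with a skewed diurnal cycle plus noise.
hourly = 20 + 8 * np.sin((hours - 9) / 24 * 2 * np.pi) ** 3 + rng.normal(0, 0.5, 24)

true_mean = hourly.mean()                          # average of all 24 hourly readings
minmax_mean = 0.5 * (hourly.min() + hourly.max())  # conventional (Tmin + Tmax)/2
print(f"hourly mean = {true_mean:.2f} C, (Tmin+Tmax)/2 = {minmax_mean:.2f} C, "
      f"bias = {minmax_mean - true_mean:+.2f} C")
```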
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph; Skofronick-Jackson, Gail
2012-01-01
During the middle part of this decade a wide variety of passive microwave imagers and sounders will be unified in the Global Precipitation Measurement (GPM) mission to provide a common basis for frequent (3 hr), global precipitation monitoring. The ability of these sensors to detect precipitation by discerning it from non-precipitating background depends upon the channels available and characteristics of the surface and atmosphere. This study quantifies the minimum detectable precipitation rate and fraction of precipitation detected for four representative instruments (TMI, GMI, AMSU-A, and AMSU-B) that will be part of the GPM constellation. Observations for these instruments were constructed from equivalent channels on the SSMIS instrument on DMSP satellites F16 and F17 and matched to precipitation data from NOAA's National Mosaic and QPE (NMQ) during 2009 over the contiguous United States. A variational optimal estimation retrieval of non-precipitation surface and atmosphere parameters was used to determine the consistency between the observed brightness temperatures and these parameters, with high cost function values shown to be related to precipitation. The minimum detectable precipitation rate, defined as the lowest rate for which probability of detection exceeds 50%, and the detected fraction of precipitation, are reported for each sensor, surface type (ocean, coast, bare land, snow cover) and precipitation type (rain, mix, snow). The best sensors over ocean and bare land were GMI (0.22 mm/hr minimum threshold and 90% of precipitation detected) and AMSU (0.26 mm/hr minimum threshold and 81% of precipitation detected), respectively. Over coasts (0.74 mm/hr threshold and 12% detected) and snow-covered surfaces (0.44 mm/hr threshold and 23% detected), AMSU again performed best but with much lower detection skill, whereas TMI had no skill over these surfaces. The sounders (particularly over water) benefited from the use of re-analysis data (vs. climatology) to set the a priori atmospheric state, and all instruments benefited from the use of a conditional snow cover emissivity database over land. It is recommended that real-time sources of these data be used in the operational GPM precipitation algorithms.
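A rough sketch of the detection-threshold definition used above (lowest rate bin for which the probability of detection exceeds 50%); the bin width, sample-size cutoff, and synthetic data are assumptions, not the study's matching procedure.

```python
import numpy as np

def minimum_detectable_rate(rates_mm_hr, detected, bin_width=0.02, min_samples=50):
    """Lowest precipitation-rate bin whose probability of detection exceeds 50%."""
    bins = np.arange(0.0, 5.0 + bin_width, bin_width)
    idx = np.digitize(rates_mm_hr, bins)
    for i in range(1, len(bins)):
        in_bin = idx == i
        if in_bin.sum() >= min_samples and detected[in_bin].mean() > 0.5:
            return bins[i - 1]
    return np.nan

# Synthetic reference rates and detection flags (not GMI/AMSU/NMQ data).
rng = np.random.default_rng(1)
rates = rng.exponential(0.8, size=20000)
detected = rng.random(20000) < np.clip(rates / 0.5, 0.0, 1.0) * 0.95
print(f"minimum detectable rate ~ {minimum_detectable_rate(rates, detected):.2f} mm/hr")
```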
Mobile Romberg test assessment (mRomberg).
Galán-Mercant, Alejandro; Cuesta-Vargas, Antonio I
2014-09-12
The diagnosis of frailty is based on physical impairments, and clinicians have indicated that early detection is one of the most effective methods for reducing the severity of physical frailty. An alternative to the classical diagnosis could be the instrumentation of classical functional tests, such as the Romberg test or the Timed Get Up and Go test. The aim of this study was (I) to measure and describe the magnitude of accelerometry values in the Romberg test in two groups of frail and non-frail elderly people through instrumentation with the iPhone 4®, (II) to analyse the performances and differences between the study groups, and (III) to analyse the performances and differences within study groups to characterise accelerometer responses to increasingly difficult challenges to balance. This is a cross-sectional study of 18 subjects over 70 years old, 9 frail subjects and 9 non-frail subjects. The non-parametric Mann-Whitney U test was used for between-group comparisons of mean values derived from different tasks. The Wilcoxon signed-rank test was used to analyse differences between different variants of the test in both independent study groups. The largest difference between groups was found in the accelerometer values with eyes closed and feet parallel: maximum peak acceleration in the lateral axis (p < 0.01), minimum peak acceleration in the lateral axis (p < 0.01) and minimum peak acceleration from the resultant vector (p < 0.01). With eyes open and feet parallel, the greatest differences found between the groups were in the maximum peak acceleration in the lateral axis (p < 0.01), minimum peak acceleration in the lateral axis (p < 0.01) and minimum peak acceleration from the resultant vector (p < 0.001). With eyes closed and feet in tandem, the greatest differences found between the groups were in the minimum peak acceleration in the lateral axis (p < 0.01). The accelerometer fitted in the iPhone 4® is able to study and analyse the kinematics of the Romberg test between frail and non-frail elderly people. In addition, the results indicate that the accelerometry values were significantly different between the frail and non-frail groups, and that values from the accelerometer increased as the test was made more complicated.
NASA Technical Reports Server (NTRS)
Fern, Lisa; Rorie, Conrad; Shively, Jay
2015-01-01
This presentation provides an overview of the work the Human Systems Integration (HSI) sub-project has done on detect and avoid (DAA) displays while working on the UAS Integration into the NAS project. Much of the work has been used to support the ongoing development of minimum operational performance standards (MOPS) for UAS by RTCA Special Committee 228. The design and results of three different human-in-the-loop simulations are discussed, with particular emphasis on the role of the UAS pilot in the Self Separation Timeline.
NASA UAS Integration into the NAS Project Detect and Avoid Display Evaluations
NASA Technical Reports Server (NTRS)
Shively, Jay
2016-01-01
As part of the Air Force - NASA Bi-Annual Research Council Meeting, slides will be presented on phase 1 Detect and Avoid (DAA) display evaluations. A series of iterative human-in-the-loop (HITL) experiments was conducted with different display configurations to objectively measure pilot performance in maintaining well clear. To date, four simulations and two mini-HITLs have been conducted. Data from these experiments have been incorporated into a revised alerting structure and included in the RTCA SC 228 Phase 1 Minimum Operational Performance Standards (MOPS) proposal. Plans for phase 2 are briefly discussed.
An analysis of relational complexity in an air traffic control conflict detection task.
Boag, Christine; Neal, Andrew; Loft, Shayne; Halford, Graeme S
2006-11-15
Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.
Improved MIMO radar GMTI via cyclic-shift transmission of orthogonal frequency division signals
NASA Astrophysics Data System (ADS)
Li, Fuyou; He, Feng; Dong, Zhen; Wu, Manqing
2018-05-01
Minimum detectable velocity (MDV) and maximum detectable velocity are both important in ground moving target indication (GMTI) systems. A smaller MDV can be achieved with a longer baseline via multiple-input multiple-output (MIMO) radar. The maximum detectable velocity is limited by blind velocities associated with the carrier frequencies, and blind velocities can be mitigated by orthogonal frequency division signals. However, the scattering echoes from different carrier frequencies are independent, which does not help improve MDV performance. An improved cyclic-shift transmission is applied to a MIMO GMTI system in this paper: MDV performance is improved by the longer baseline, and maximum detectable velocity performance is improved by mitigating blind velocities via multiple carrier frequencies. The signal model for this mode is established, the principle of mitigating blind velocities with orthogonal frequency division signals is presented, and the performance of different MIMO GMTI waveforms and of different array configurations is analysed. Simulation results based on space-time-frequency adaptive processing prove that the proposed method is a valid way to improve GMTI performance.
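As a sketch of why multiple carrier frequencies help: the k-th blind velocity of a pulsed GMTI radar is v_b = k·c·PRF/(2·f_c), so the blind speeds fall at different velocities for different carriers. The PRF and carrier frequencies below are illustrative assumptions, not the paper's parameters.

```python
C = 3.0e8     # speed of light, m/s
PRF = 2000.0  # pulse repetition frequency, Hz (assumed)

for f_c in (9.5e9, 9.6e9, 9.7e9):  # assumed carrier frequencies, Hz
    blind = [k * C * PRF / (2.0 * f_c) for k in range(1, 4)]
    print(f"f_c = {f_c / 1e9:.1f} GHz -> blind velocities: "
          + ", ".join(f"{v:.1f} m/s" for v in blind))
```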
Colors of extreme exo-Earth environments.
Hegde, Siddharth; Kaltenegger, Lisa
2013-01-01
The search for extrasolar planets has already detected rocky planets and several planetary candidates with minimum masses that are consistent with rocky planets in the habitable zone of their host stars. A low-resolution spectrum in the form of a color-color diagram of an exoplanet is likely to be one of the first post-detection quantities to be measured for the case of direct detection. In this paper, we explore potentially detectable surface features on rocky exoplanets and their connection to, and importance as, a habitat for extremophiles, as known on Earth. Extremophiles provide us with the minimum known envelope of environmental limits for life on our planet. The color of a planet reveals information on its properties, especially for surface features of rocky planets with clear atmospheres. We use filter photometry in the visible as a first step in the characterization of rocky exoplanets to prioritize targets for follow-up spectroscopy. Many surface environments on Earth have characteristic albedos and occupy a different color space in the visible waveband (0.4-0.9 μm) that can be distinguished remotely. These detectable surface features can be linked to the extreme niches that support extremophiles on Earth and provide a link between geomicrobiology and observational astronomy. This paper explores how filter photometry can serve as a first step in characterizing Earth-like exoplanets for an aerobic as well as an anaerobic atmosphere, thereby prioritizing targets to search for atmospheric biosignatures.
Characterization of Terahertz Bi-Material Sensors with Integrated Metamaterial Absorbers
2013-09-01
The responsivity, the speed of operation, and the minimum detected incident power were measured using a quantum cascade laser (QCL) operating at 3.8 THz.
Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)
2011-01-01
Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. In all… the interior point method depends on the value of the regularization parameter λ; in the experiments, the total number of PCG iterations was found to be a few hundred.
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm², or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
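For reference, the standard two-sample form of the MDD described above can be sketched as follows; the pooled standard deviation and sample size in the example are assumed values, not the study's plot data, and the report's exact formulation may differ.

```python
import numpy as np
from scipy import stats

def minimum_detectable_difference(s, n, alpha=0.05, power=0.90):
    """Smallest true difference between two treatment means detectable with a
    two-sided, two-sample t-test, assuming equal variances and n samples per
    treatment; s is the pooled standard deviation."""
    df = 2 * (n - 1)
    t_alpha = stats.t.ppf(1.0 - alpha / 2.0, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * np.sqrt(2.0 * s**2 / n)

# Example with assumed values: pooled SD of 60 mg SOC/cm^2, 8 plots per cover type.
print(f"MDD ~ {minimum_detectable_difference(s=60.0, n=8):.0f} mg SOC/cm^2")
```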
Minimum requirements for adequate nighttime conspicuity of highway signs
DOT National Transportation Integrated Search
1988-02-01
A laboratory and field study were conducted to assess the minimum luminance levels of signs to ensure that they will be detected and identified at adequate distances under nighttime driving conditions. A total of 30 subjects participated in the field...
A novel pixellated solid-state photon detector for enhancing the Everhart-Thornley detector.
Chuah, Joon Huang; Holburn, David
2013-06-01
This article presents a pixellated solid-state photon detector designed specifically to improve certain aspects of the existing Everhart-Thornley detector. The photon detector was constructed and fabricated in an Austriamicrosystems 0.35 µm complementary metal-oxide-semiconductor process technology. This integrated circuit consists of an array of high-responsivity photodiodes coupled to corresponding low-noise transimpedance amplifiers, a selector-combiner circuit and a variable-gain postamplifier. Simulated and experimental results show that the photon detector can achieve a maximum transimpedance gain of 170 dBΩ and minimum bandwidth of 3.6 MHz. It is able to detect signals with optical power as low as 10 nW and produces a minimum signal-to-noise ratio (SNR) of 24 dB regardless of gain configuration. The detector has been proven to be able to effectively select and combine signals from different pixels. The key advantages of this detector are smaller dimensions, higher cost effectiveness, lower voltage and power requirements and better integration. The photon detector supports pixel-selection configurability which may improve overall SNR and also potentially generate images for different analyses. This work has contributed to the future research of system-level integration of a pixellated solid-state detector for secondary electron detection in the scanning electron microscope. Copyright © 2013 Wiley Periodicals, Inc.
Mann, L.J.
1989-01-01
Concern has been expressed that some of the approximately 30,900 curies of tritium disposed to the Snake River Plain aquifer from 1952 to 1988 at the INEL (Idaho National Engineering Laboratory) have migrated to springs discharging to the Snake River in the Twin Falls-Hagerman area. To document tritium concentrations in springflow, 17 springs were sampled in November 1988 and 19 springs were sampled in March 1989. Tritium concentrations were less than the minimum detectable concentration of 0.5 pCi/mL (picocuries per milliliter) in November 1988 and less than the minimum detectable concentration of 0.2 pCi/mL in March 1989; the minimum detectable concentration was smaller in March 1989 owing to a longer counting time in the liquid scintillation system. The maximum contaminant level of tritium in drinking water as established by the U.S. Environmental Protection Agency is 20 pCi/mL. U.S. Environmental Protection Agency sample analyses indicate that the tritium concentration has decreased in the Snake River near Buhl since the 1970's. In 1974-79, tritium concentrations were less than 0.3 ± 0.2 pCi/mL in 3 of 20 samples; in 1983-88, 17 of 23 samples contained less than 0.3 ± 0.2 pCi/mL of tritium; the minimum detectable concentration is 0.2 pCi/mL. On the basis of the decreasing tritium concentrations in the Snake River, their correlation with the cessation of atmospheric weapons tests, the tritium concentrations in springflow below the minimum detectable concentration, and the distribution of tritium in groundwater at the INEL, aqueous disposal of tritium at the INEL has had no measurable effect on tritium concentrations in springflow from the Snake River Plain aquifer or in the Snake River near Buhl. (USGS)
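The drop in the minimum detectable concentration with a longer counting time follows the usual Currie-type behaviour; a hedged sketch with assumed background rate, counting efficiency, and sample volume (not the USGS laboratory's actual parameters):

```python
import numpy as np

def currie_mdc_pci_per_ml(background_cpm, count_time_min, efficiency, sample_ml):
    """Currie-style minimum detectable concentration for liquid scintillation
    counting: L_D = 2.71 + 4.65*sqrt(B) counts, converted via 2.22 dpm per pCi."""
    b_counts = background_cpm * count_time_min
    l_d_counts = 2.71 + 4.65 * np.sqrt(b_counts)
    detectable_dpm = l_d_counts / (efficiency * count_time_min)
    return detectable_dpm / (2.22 * sample_ml)

for t in (100.0, 500.0):  # counting times in minutes (assumed)
    mdc = currie_mdc_pci_per_ml(background_cpm=2.0, count_time_min=t,
                                efficiency=0.25, sample_ml=5.0)
    print(f"count time {t:5.0f} min -> MDC ~ {mdc:.2f} pCi/mL")
```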
Design of an integrated sensor system for the detection of traces of different molecules in the air
NASA Astrophysics Data System (ADS)
Strle, D.; Muševič, I.
2015-04-01
This article presents the design of a miniature detection system and its associated signal-processing electronics, which can detect and selectively recognize vapor traces of different materials in the air, including explosives. It is based on an array of surface-functionalized comb capacitive sensors, an extremely low-noise analog integrated electronic circuit, hardwired digital signal-processing hardware, and additional software running on a PC. The instrument is sensitive and selective, consumes a minimum amount of energy, is very small (a few mm³) and cheap to produce in large quantities, and is insensitive to mechanical influences. Using an electronic detection system built from a low-noise analog front end and hard-wired digital signal processing, it is possible to detect less than 0.3 ppt of TNT molecules in the atmosphere (3 TNT molecules in 10¹³ molecules of air) at 25 °C in a 1 Hz bandwidth, using a very small volume and approximately 10 mA of current from a 5 V supply voltage. The sensors are implemented in a modified MEMS process and the analog electronics in 0.18 µm CMOS technology.
Development of High-Speed Fluorescent X-Ray Micro-Computed Tomography
NASA Astrophysics Data System (ADS)
Takeda, T.; Tsuchiya, Y.; Kuroe, T.; Zeniya, T.; Wu, J.; Lwin, Thet-Thet; Yashiro, T.; Yuasa, T.; Hyodo, K.; Matsumura, K.; Dilmanian, F. A.; Itai, Y.; Akatsuka, T.
2004-05-01
A high-speed fluorescent x-ray CT (FXCT) system using monochromatic synchrotron x rays was developed to detect very low concentrations of medium-Z elements for biomedical use. The system is equipped with two types of high-purity germanium detectors, fast electronics, and software. Preliminary images of a 10 mm diameter plastic phantom containing channels filled with iodine solutions of different concentrations showed a minimum detection level of 0.002 mg I/ml at an in-plane spatial resolution of 100 μm. Furthermore, the acquisition time was reduced by about half compared to the previous system. The results indicate that FXCT is a highly sensitive imaging modality capable of detecting very low concentrations of iodine, and that the method has potential in biomedical applications.
Optimal use of land surface temperature data to detect changes in tropical forest cover
NASA Astrophysics Data System (ADS)
van Leeuwen, Thijs T.; Frank, Andrew J.; Jin, Yufang; Smyth, Padhraic; Goulden, Michael L.; van der Werf, Guido R.; Randerson, James T.
2011-06-01
Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the buildup of atmospheric CO2. Here we examined different ways to use land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05° × 0.05° Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations of LST and Program for the Estimation of Deforestation in the Brazilian Amazon (PRODES) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10° × 10° included the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (˜1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pantropical deforestation classifiers. Combined with the normalized difference vegetation index, a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES.
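A minimal sketch of the kind of classifier described above, fitting a logistic regression on the day-night LST difference and NDVI; the synthetic pixels below stand in for the MODIS/PRODES data and the coefficients are not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
dlst = rng.normal(8.0, 3.0, n)    # day-night LST difference (K), synthetic
ndvi = rng.normal(0.75, 0.1, n)   # NDVI, synthetic
logit = -3.0 + 0.45 * dlst - 2.0 * ndvi                     # deforested pixels tend to have
deforested = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # larger dLST and lower NDVI

X = np.column_stack([dlst, ndvi])
model = LogisticRegression().fit(X, deforested)
print("coefficients (dLST, NDVI):", model.coef_[0], " accuracy:", model.score(X, deforested))
```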
Uyar, Meral; Davutoğlu, Vedat; Aydın, Neriman; Filiz, Ayten
2013-05-01
The aim of this study is to compare metabolic syndrome with syndrome Z, a growing epidemic, in terms of risk factors, demographic variables, and gender differences in our large cohort in the southeastern area of Turkey. Data of patients admitted to the sleep clinic of the University of Gaziantep from January 2006 to January 2011 were retrospectively evaluated. ATP III and JNC 7 criteria were used to define metabolic syndrome and hypertension. Data of 761 patients were evaluated. Hypertension, diabetes mellitus, coronary artery disease, pulmonary hypertension, and left ventricular hypertrophy were more common in patients with syndrome Z than in patients without metabolic syndrome. Age, waist and neck circumferences, BMI, triglycerides, glucose, and Epworth sleepiness scale score were higher, whereas the minimum oxygen saturation during sleep was lower, in patients with syndrome Z. Metabolic syndrome was more common in sleep apneic subjects than in controls (58 versus 30%). Female sleep apneics showed a higher rate of metabolic syndrome than males (74 versus 52%). Hypertension, diabetes mellitus, coronary artery disease, and left ventricular hypertrophy were more common in males with syndrome Z than in males without metabolic syndrome. Snoring and excessive daytime sleepiness were more common in females with syndrome Z than in females without metabolic syndrome. Systemic/pulmonary hypertension, diabetes mellitus, and left ventricular hypertrophy were more common in females with syndrome Z than in females without metabolic syndrome. Complaints of headache and systemic/pulmonary hypertension were more common among females than males with syndrome Z. Female syndrome Z patients had lower minimum oxygen saturation than male patients with syndrome Z. Metabolic syndrome in sleep apneic patients is more prevalent than in controls. All metabolic syndrome parameters differed significantly between obstructive sleep apnea patients with respect to gender, with more severe coronary risk factors in males.
A practical approach to determination of laboratory GC-MS limits of detection.
Underwood, P J; Kananen, G E; Armitage, E K
1997-01-01
Determination of limit of detection (LOD) values in a forensic laboratory serves a fundamental forensic requirement for assay performance. In addition to demonstrating assay capability, LOD values can also be used to fulfill certification requirements of a high-volume forensic drug laboratory. The LOD was defined as the lowest concentration of drug that the laboratory can detect in a specimen with forensic certainty at a minimum of 85% of the time. Overall batch acceptance criteria included acceptable quantitation of control materials (within 20% of target), acceptable chromatography (symmetry, peak integration, peak shape, and peak and baseline resolution), retention time within ±1% of the extracted standard, and mass ion ratios within ±20% of the extracted standard mass ion ratios. Individual specimen acceptance criteria were the same as the batch acceptance criteria, excluding the quantitation requirement. Data were collected from all instruments on different runs. A minimum of ten data points was required for each certified instrument, and a minimum of 85% of data points had to be acceptable. Quantitation within ±20% of the LOD concentration was not required, but acceptable mass ratios were required. Data points with poor chromatography (internal standard failed mass ratios; interference of the baseline, for example, shoulders; asymmetry; and poor baseline resolution) were omitted from the acceptance-rate calculation. Data points with good chromatography but failed mass ion ratios were included in the acceptance-rate calculation. With these criteria, we established the following LODs: 11-nor-delta-9-tetrahydrocannabinol-9-carboxylic acid, 2 ng/mL; benzoylecgonine, 5 ng/mL; phencyclidine, 2.5 ng/mL; amphetamine, 150 ng/mL; methamphetamine, 100 ng/mL; codeine, 500 ng/mL; and morphine, 1000 ng/mL.
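The acceptance rule described above (at least ten acceptable data points per instrument, at least 85% meeting the individual criteria) can be expressed compactly; this is a paraphrase of the stated procedure, not the laboratory's software.

```python
def meets_lod_criteria(results, min_points=10, min_rate=0.85):
    """results: booleans, one per injection at the candidate concentration
    (True = retention time, ion ratios, and chromatography all acceptable).
    Points excluded for poor chromatography are simply omitted from the list,
    as in the described procedure; good-chromatography points with failed ion
    ratios are included as False."""
    return len(results) >= min_points and sum(results) / len(results) >= min_rate

# Hypothetical example: 11 evaluable injections, 10 acceptable (91% >= 85%).
print(meets_lod_criteria([True] * 10 + [False]))  # True
```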
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
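For the fixed-precision sample sizes quoted above, a Green-style plan based on Taylor's power law (variance = a·m^b) gives n = a·m^(b−2)/D². The coefficients below are placeholders chosen only so the example reproduces the 15- and 9-plant figures; they are not the study's fitted values.

```python
import numpy as np

def green_sample_size(mean_density, a, b, precision=0.25):
    """Minimum sample size for a fixed-precision (D) estimate of mean density,
    given Taylor's power law coefficients a and b."""
    return int(np.ceil(a * mean_density ** (b - 2.0) / precision ** 2))

for m in (1.0, 10.0):  # thrips per plant
    print(f"density {m:4.0f} per plant -> n = {green_sample_size(m, a=0.93, b=1.78)}")
```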
NASA Astrophysics Data System (ADS)
Ding, Y.; Chen, X.; Bi, R.; Zhang, L. H.; Li, L.; Zhao, M.
2016-12-01
Alkenones and sterols are useful biomarkers for reconstructing past productivity and community structure changes in aquatic environments. Until now, the quantitative relationship between biomarker content and biomass in marine phytoplankton has remained understudied, which hinders the quantitative reconstruction of ocean changes. In this study, we carried out laboratory culture experiments to determine the quantitative relationship between biomarker content and biomass under three temperatures (15 °C, 20 °C and 25 °C) and three N:P supply ratios (N:P = 10:1, 24:1 and 63:1 mol mol⁻¹) for three common phytoplankton groups: diatoms (Phaeodactylum tricornutum Bohlin, Skeletonema costatum, Chaetoceros muelleri), dinoflagellates (Karenia mikimotoi, Prorocentrum donghaiense, Prorocentrum minimum), and coccolithophores (Emiliania huxleyi). Alkenones were only detected in E. huxleyi and dinosterol was only detected in dinoflagellates, confirming that they are the biomarkers for these two groups of phytoplankton, respectively. Brassicasterol was detected in all three groups of phytoplankton, but its content was higher in diatoms, suggesting that it is still a useful biomarker for diatoms. Cell-normalized alkenone content (pg/cell) increases with increasing growth temperature by up to 30%, while the effect of nutrients on alkenone content is minimal. On the other hand, cell-normalized dinosterol content is not temperature dependent, but it is strongly affected by nutrient ratio changes. The effects of temperature and nutrients on cell-normalized brassicasterol content are phytoplankton dependent. For diatoms, the temperature effect is minimal while the nutrient effect is significant but also varies with temperature. Our results have strong implications for understanding how different phytoplankton respond to global changes, and for more quantitative reconstruction of past productivity and community structure changes using these biomarkers.
Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D
2017-01-01
The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: 1 rate of adequate bowel preparation (minimum standard 90%); 2 cecal intubation rate (minimum standard 90%); 3 adenoma detection rate (minimum standard 25%); 4 appropriate polypectomy technique (minimum standard 80%); 5 complication rate (minimum standard not set); 6 patient experience (minimum standard not set); 7 appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745
Development of Detectability Limits for On-Orbit Inspection of Space Shuttle Wing Leading Edge
NASA Technical Reports Server (NTRS)
Stephan, Ryan A.; Johnson, David G.; Mastropietro, A. J.; Ancarrow, Walt C.
2005-01-01
At the conclusion of the Columbia Accident Investigation, one of the recommendations of the Columbia Accident Investigation Board (CAIB) was that NASA develop and implement an inspection plan for the Reinforced Carbon-Carbon (RCC) system components of the Space Shuttle. To address this recommendation, a group of scientists and engineers at NASA Langley Research Center proposed the use of an IR camera to inspect the RCC. Any crack in an RCC panel changes the thermal resistance of the material in the direction perpendicular to the crack. The change in thermal resistance can be made visible by introducing a heat flow across the crack and using an IR camera to image the resulting surface temperature distribution. The temperature difference across the crack depends on the change in the thermal resistance, the length of the crack, the local thermal gradient, and the rate of radiation exchange with the environment. This paper describes how the authors derived the minimum thermal gradient detectability limits for a through crack in an RCC panel. This paper will also show, through the use of a transient, 3-dimensional, finite element model, that these minimum gradients naturally exist on-orbit. The results from the finite element model confirm that there are sufficient thermal gradients to detect a crack on 96% of the RCC leading edge.
Optimal use of land surface temperature data to detect changes in tropical forest cover
NASA Astrophysics Data System (ADS)
Van Leeuwen, T. T.; Frank, A. J.; Jin, Y.; Smyth, P.; Goulden, M.; van der Werf, G.; Randerson, J. T.
2011-12-01
Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the build up of atmospheric CO2. Here we examined different ways to use remotely sensed land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05×0.05 degree Terra MODerate Resolution Imaging Spectroradiometer (MODIS) observations of LST and PRODES (Program for the Estimation of Deforestation in the Brazilian Amazon) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10×10 degree included most of the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (~1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pan-tropical deforestation classifiers. Combined with the normalized difference vegetation index (NDVI), a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST difference decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES. The use of day-night LST differences may be particularly valuable for use with satellites that do not have spectral bands that allow for the estimation of NDVI or other vegetation indices.
Acoustic rhinometry of the Indian and Anglo-Saxon nose.
Gurr, P; Diver, J; Morgan, N; MacGregor, F; Lund, V
1996-09-01
The internal and external geometry of the nose has previously been shown to differ between Anglo-Saxon, Chinese, and Negro noses. It is therefore important to define the normal geometric nasal parameters of a given race, so as to detect the abnormal nose. We present acoustic rhinometric data, with height-adjusted figures, examining the nasal minimum cross-sectional area (MCA), the distance to the nostril from the MCA, and the MCA between 0-6 cm. These data show no significant differences between Indian and Anglo-Saxon noses.
Zisselman, Marc H; Smith, Robert V; Smith, Stephanie A; Daskalakis, Constantine; Sanchez, Francisco
2006-01-01
Little research has explored racial and socioeconomic differences in the presence, detection, and treatment of neuropsychiatric symptoms in nursing home residents. To evaluate racial and socioeconomic differences on mood and behavior Minimum Data Set (MDS) recorded symptoms, MDS recorded psychiatric diagnoses, and MDS identified psychotropic medication use. Data were obtained through a cross-sectional review of MDS data of 290 African-American and white residents of 2 nursing homes. The association between age, gender, race, and pay status with mood and behavior patterns, psychiatric diagnoses, and use of psychotropic medication was evaluated. White residents were more likely than African American residents to have MDS recorded psychiatric diagnoses (odds ratio, OR = 3.24), but there were no significant racial differences in recorded mood or behavior symptomatology or in the pharmacologic treatment of mental illness. Medicaid recipients were more likely than nonrecipients to have behavior symptoms (OR = 2.09), have a psychiatric diagnosis (OR = 2.91), and receive psychotropic medications in the absence of a psychiatric diagnosis (OR = 3.62). Pay status was associated with recorded symptoms, diagnoses, and medications, but racial differences were found only for recorded diagnoses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
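A rough Monte Carlo sketch of the idea being validated: nearest-neighbor averaging (NNA) of a gridded blank scan lowers the scanning critical level because the averaged background has a smaller variance. The grid size, background count rate, and 95th-percentile decision rule are assumptions for illustration, not ERG's algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, grid = 2000, (50, 50)
background_counts = 25.0  # mean counts per grid cell (assumed)

def critical_level(maps):
    """95th percentile of the per-scan maximum cell value over blank scans."""
    return np.percentile(maps.reshape(maps.shape[0], -1).max(axis=1), 95)

blanks = rng.poisson(background_counts, size=(n_trials, *grid)).astype(float)

# 3x3 nearest-neighbor average (interior cells only, for simplicity).
nna = sum(blanks[:, i:i + grid[0] - 2, j:j + grid[1] - 2]
          for i in range(3) for j in range(3)) / 9.0

print(f"L_c per cell, raw scan ~ {critical_level(blanks):.1f} counts")
print(f"L_c per cell, 3x3 NNA  ~ {critical_level(nna):.1f} counts")
```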
King, Michael J.; Sanchez, Roberto J.; Moss, William C.
2013-03-19
A passive blast pressure sensor for detecting blast overpressures of at least a predetermined minimum threshold pressure. The blast pressure sensor includes a piston-cylinder arrangement with one end of the piston having a detection surface exposed to a blast event monitored medium through one end of the cylinder and the other end of the piston having a striker surface positioned to impact a contact stress sensitive film that is positioned against a strike surface of a rigid body, such as a backing plate. The contact stress sensitive film is of a type which changes color in response to at least a predetermined minimum contact stress which is defined as a product of the predetermined minimum threshold pressure and an amplification factor of the piston. In this manner, a color change in the film arising from impact of the piston accelerated by a blast event provides visual indication that a blast overpressure encountered from the blast event was not less than the predetermined minimum threshold pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunder, Andrea; Chaboyer, Brian; Layden, Andrew
New R-band observations of 21 local field RR Lyrae variable stars are used to explore the reliability of minimum light (V - R) colors as a tool for measuring interstellar reddening. For each star, R-band intensity mean magnitudes and light amplitudes are presented. Corresponding V-band light curves from the literature are supplemented with the new photometry, and (V - R) colors at minimum light are determined for a subset of these stars as well as for other stars in the literature. Two different definitions of minimum light color are examined: one uses a Fourier decomposition of the V and R light curves to find (V - R) at minimum V-band light, (V - R)^F_min, and the other uses the average color over the phase interval 0.5-0.8, (V - R)^φ(0.5-0.8)_min. From 31 stars with a wide range of metallicities and pulsation periods, the mean dereddened RR Lyrae color at minimum light is (V - R)^F_min,0 = 0.28 ± 0.02 mag and (V - R)^φ(0.5-0.8)_min,0 = 0.27 ± 0.02 mag. As was found by Guldenschuh et al. using (V - I) colors, any dependence of the star's minimum light color on metallicity or pulsation amplitude is too weak to be formally detected. We find that the intrinsic (V - R) colors of Galactic bulge RR Lyrae stars are similar to those of their local counterparts and hence that bulge RR0 Lyrae stars do not have anomalous colors compared to the local RR Lyrae stars.
Research and development on performance models of thermal imaging systems
NASA Astrophysics Data System (ADS)
Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan
2009-07-01
Traditional ACQUIRE models perform the discrimination tasks of detection, target orientation, recognition, and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel-width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; unresolved questions in the performance assessment of TIS are identified; and, finally, development directions for TIS performance models are discussed.
Pérez Zapata, A I; Gutiérrez Samaniego, M; Rodríguez Cuéllar, E; Gómez de la Cámara, A; Ruiz López, P
Surgery carries a high risk of adverse events (AE). The main objective of this study is to compare the effectiveness of the Trigger tool with that of the National Health System hospital discharge registry, the minimum basic data set (MBDS), in detecting adverse events in patients admitted to General Surgery and undergoing surgery. This was an observational, descriptive, retrospective study of patients admitted to the general surgery department of a tertiary hospital and undergoing surgery in 2012. Adverse events were identified by reviewing the medical records, using an adaptation of the "Global Trigger Tool" methodology, as well as the MBDS records for the same patients. Once the AE were identified, they were classified according to damage and to the extent to which they could have been avoided. The area under the ROC curve was used to determine the discriminatory power of the tools, and the Hanley and McNeil test was used to compare both tools. AE prevalence was 36.8%. The TT detected 89.9% of all AE, while the MBDS detected 28.48%. The TT provides more information on the nature and characteristics of the AE. The area under the curve was 0.89 for the TT and 0.66 for the MBDS; this difference was statistically significant (P<.001). The Trigger tool detects three times more adverse events than the MBDS registry. The prevalence of adverse events in General Surgery is higher than that estimated in other studies. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.
Shin, Hye-Young; Park, Hae-Young Lopilly; Jung, Kyoung-In; Choi, Jin-A; Park, Chan Kee
2014-01-01
To determine whether the ganglion cell-inner plexiform layer (GCIPL) or circumpapillary retinal nerve fiber layer (cpRNFL) is better at distinguishing eyes with early glaucoma from normal eyes on the basis of the initial location of the visual field (VF) damage. Retrospective, observational study. Eighty-four patients with early glaucoma and 43 normal subjects were enrolled. The patients with glaucoma were subdivided into 2 groups according to the location of VF damage: (1) an isolated parafoveal scotoma (PFS, N = 42) within 12 points of a central 10 degrees in 1 hemifield or (2) an isolated peripheral nasal step (PNS, N = 42) within the nasal periphery outside 10 degrees of fixation in 1 hemifield. All patients underwent macular and optic disc scanning using Cirrus high-definition optical coherence tomography (Carl Zeiss Meditec, Dublin, CA). The GCIPL and cpRNFL thicknesses were compared between groups. Areas under the receiver operating characteristic curves (AUCs) were calculated. Comparison of diagnostic ability using AUCs. The average and minimum GCIPL thicknesses of the PFS group were significantly thinner than those of the PNS group, whereas there was no significant difference in the average retinal nerve fiber layer (RNFL) thickness between the 2 groups. The AUCs of the average (0.962) and minimum GCIPL (0.973) thicknesses did not differ from that of the average RNFL thickness (0.972) for discriminating glaucomatous changes between normal and all glaucoma eyes (P = 0.566 and 0.974, respectively). In the PFS group, the AUCs of the average (0.988) and minimum GCIPL (0.999) thicknesses were greater than that of the average RNFL thickness (0.961; P = 0.307 and 0.125, respectively). However, the AUCs of the average (0.936) and minimum GCIPL (0.947) thicknesses were lower than that of the average RNFL thickness (0.984) in the PNS group (P = 0.032 and 0.069, respectively). The GCIPL parameters were more valuable than the cpRNFL parameters for detecting glaucoma in eyes with parafoveal VF loss, and the cpRNFL parameters were better than the GCIPL parameters for detecting glaucoma in eyes with peripheral VF loss. Clinicians should know that the diagnostic capability of macular GCIPL parameters depends largely on the location of the VF loss. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
77 FR 51807 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-27
... Minimum Data Elements (MDEs) for the National Breast and Cervical Cancer Early Detection Program (NBCCEDP... screening and early detection tests for breast and cervical cancer. Mammography is extremely valuable as an early detection tool because it can detect breast cancer well before the woman can feel the lump, when...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahmood, U; Dauer, L; Erdi, Y
Purpose: Our goal was to evaluate low-contrast detectability (LCD) for abdominal CT protocols across two CT scanner manufacturers, while producing a similar noise texture and CTDIvol for the acquired images. Methods: A CIRS tissue-equivalent LCD phantom containing three columns of 7 spherical targets, ranging from 10 mm to 2.4 mm, that are 5, 10, and 20 HU below the background matrix (HUBB) was scanned using two scanners: a GE HD750 64-slice scanner and a Siemens Somatom Definition AS 64-slice scanner. Protocols were designed to deliver a CTDIvol of 12.26 mGy, and images were reconstructed with FBP, ASIR, and SAFIRE. Comparisons were made between those algorithms that had matching noise power spectrum (NPS) peaks. NPS information was extracted from a previously published article that matched NPS peak frequencies across manufacturers by calculating the NPS from uniform phantom images reconstructed with several IR algorithms. Results: The minimum detectable lesion size was 6.3 mm in the 20 HUBB and 10 HUBB columns and 10 mm in the 5 HUBB column for the GE HD750 scanner. The minimum detectable lesion size was 4.8 mm in the 20 HUBB column, 9.5 mm in the 10 HUBB column, and 10 mm in the 5 HUBB column for the Siemens Somatom Definition AS. Conclusion: Reducing radiation dose while improving or maintaining LCD is possible with the application of IR. However, there are several different IR algorithms, each generating a different resolution and noise texture. In multi-manufacturer settings, matching only the CTDIvol between manufacturers may result in a loss of clinically relevant information.
Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María
2018-04-01
Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients treated in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error, two conformance errors, and two completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement therapy, data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of the minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.
Allahdina, Ali M; Stetson, Paul F; Vitale, Susan; Wong, Wai T; Chew, Emily Y; Ferris, Fredrick L; Sieving, Paul A; Cukras, Catherine
2018-04-01
As optical coherence tomography (OCT) minimum intensity (MI) analysis provides a quantitative assessment of changes in the outer nuclear layer (ONL), we evaluated the ability of OCT-MI analysis to detect hydroxychloroquine toxicity. Fifty-seven predominantly female participants (91.2% female; mean age, 55.7 ± 10.4 years; mean time on hydroxychloroquine, 15.0 ± 7.5 years) were enrolled in a case-control study and categorized into affected (i.e., with toxicity, n = 19) and unaffected (n = 38) groups using objective multifocal electroretinographic (mfERG) criteria. Spectral-domain OCT scans of the macula were analyzed and OCT-MI values quantitated for each subfield of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. A two-sample U-test and a cross-validation approach were used to assess the sensitivity and specificity of toxicity detection according to OCT-MI criteria. The medians of the OCT-MI values in all nine of the ETDRS subfields were significantly elevated in the affected group relative to the unaffected group (P < 0.005 for all comparisons), with the largest difference found for the inner inferior subfield (P < 0.0001). The receiver operating characteristic analysis of median MI values of the inner inferior subfields showed high sensitivity and high specificity in the detection of toxicity with area under the curve = 0.99. Retinal changes secondary to hydroxychloroquine toxicity result in increased OCT reflectivity in the ONL that can be detected and quantitated using OCT-MI analysis. Analysis of OCT-MI values demonstrates high sensitivity and specificity for detecting the presence of hydroxychloroquine toxicity in this cohort and may contribute additionally to current screening practices.
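A compact sketch of the ROC analysis described above, applied to a single scalar feature (the median MI of the inner inferior subfield); the values are synthetic stand-ins, not the study's measurements, and the cross-validation step is omitted.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
unaffected = rng.normal(30.0, 3.0, 38)  # synthetic median MI values (arbitrary units)
affected = rng.normal(45.0, 5.0, 19)
values = np.concatenate([unaffected, affected])
labels = np.concatenate([np.zeros(38), np.ones(19)])

auc = roc_auc_score(labels, values)
fpr, tpr, thr = roc_curve(labels, values)
best = np.argmax(tpr - fpr)  # Youden index picks an operating cut-off
print(f"AUC = {auc:.2f}; cut-off ~ {thr[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```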
[Research on Identifying Spatial Objects Using Spectrum Analysis Techniques].
Song, Wei; Feng, Shi-qi; Shi, Jing; Xu, Rong; Wang, Gong-chang; Li, Bin-yu; Liu, Yu; Li, Shuang; Cao, Rui; Cai, Hong-xing; Zhang, Xi-he; Tan, Yong
2015-06-01
The high-precision scattering spectrum of space debris with a minimum brightness of 4.2 has been observed from the ground with a spectral resolution of 0.5 nm. Clear differences between different types of objects are obtained by normalizing the spectral data and analyzing their discrete rate. The normalized multi-frame scattering spectral line shapes are identical from frame to frame for rocket debris, but differ for lapsed satellites. The discrete rate of the normalized single-frame spectra ranges from 0.978% to 3.067% for rocket debris, with a small difference between the oscillation and the average value, and from 3.1184% to 19.4727% for lapsed satellites, with a relatively large difference between the oscillation and the average value. The reason is that the composition of rocket debris is uniform, while that of lapsed satellites is complex. Therefore, ground-based spectrum detection can be used for the classification of space debris.
NASA Astrophysics Data System (ADS)
Vujović, Dragana; Todorović, Nedeljko; Paskota, Mira
2018-04-01
With the goal of finding summer climate patterns in the region of Belgrade (Serbia) over the period 1888-2013, different techniques of multivariate statistical analysis were used to analyze the simultaneous changes of a number of climatologic parameters. An increasing trend of the mean daily minimum temperature was detected. In recent decades (1960-2013), this increase was much more pronounced. The number of days with a daily minimum temperature greater than or equal to 20 °C also increased significantly. Precipitation had no statistically significant trend. Spectral analysis showed a repetitive nature of the climatologic parameters, with periods that can be roughly classified into three groups by duration: (1) 6 to 7 years, (2) 10 to 18 years, and (3) 21, 31, and 41 years. The temperature variables mainly had one period of repetitiveness of 5 to 7 years. The correlations between regional fluctuations of temperature and precipitation and the atmospheric circulation indices were also analyzed. The North Atlantic oscillation index had the same periodicity as the precipitation and was not correlated with the temperature variables. The Atlantic multidecadal oscillation index correlated well with the summer mean daily minimum and summer mean temperatures. The underlying structure of the data was analyzed by principal component analysis, which detected the following four easily interpreted dimensions: More sunshine-Higher temperature, Precipitation, Extreme heats, and Changeable summer.
Image reduction pipeline for the detection of variable sources in highly crowded fields
NASA Astrophysics Data System (ADS)
Gössl, C. A.; Riffeser, A.
2002-01-01
We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction, Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise dominated images and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per pixel error propagation allows us to give accurate errors for each measurement.
50 CFR 218.174 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-based surveys shall be designed to maximize detections of marine mammals near mission activity event. (2... Navy to implement, at a minimum, the monitoring activities summarized below: (1) Visual Surveys: (i) The Holder of this Authorization shall conduct a minimum of 2 special visual surveys per year to...
Eusebio, Lidia; Capelli, Laura; Sironi, Selena
2016-01-01
Despite initial enthusiasm towards electronic noses and their possible application in different fields, and quite a lot of promising results, several criticalities emerge from most published research studies, and, as a matter of fact, the diffusion of electronic noses in real-life applications is still very limited. In general, a first step towards the large-scale diffusion of an analysis method is standardization. The aim of this paper is to describe the experimental procedure adopted in order to evaluate electronic nose performances, with the final purpose of establishing minimum performance requirements, which is considered to be a first crucial step towards standardization of the specific case of electronic nose application for environmental odor monitoring at receptors. Based on the experimental results of the performance testing of a commercialized electronic nose type with respect to three criteria (i.e., response invariability to variable atmospheric conditions, instrumental detection limit, and odor classification accuracy), it was possible to hypothesize a logic that could be adopted for the definition of minimum performance requirements, according to the idea that these are technologically achievable. PMID:27657086
Antarctic meteor observations using the Davis MST and meteor radars
NASA Astrophysics Data System (ADS)
Holdsworth, David A.; Murphy, Damian J.; Reid, Iain M.; Morris, Ray J.
2008-07-01
This paper presents the meteor observations obtained using two radars installed at Davis (68.6°S, 78.0°E), Antarctica. The Davis MST radar was installed primarily for observation of polar mesosphere summer echoes, with additional transmit and receive antennas installed to allow all-sky interferometric meteor radar observations. The Davis meteor radar performs dedicated all-sky interferometric meteor radar observations. The annual count rate variation for both radars peaks in mid-summer and reaches a minimum in early spring. The height distribution shows significant annual variation, with minimum (maximum) peak heights and maximum (minimum) height widths in early spring (mid-summer). Although the meteor radar count rate and height distribution variations are consistent with a similar frequency meteor radar operating at Andenes (69.3°N), the peak heights show a much larger variation than at Andenes, while the count rate maximum-to-minimum ratios show a much smaller variation. Investigation of the effects of the temporal sampling parameters suggests that these differences are consistent with the different temporal sampling strategies used by the Davis and Andenes meteor radars. The new radiant mapping procedure of Jones and Jones (2006, Mon. Not. R. Astron. Soc., 367(3), 1050-1056, doi: 10.1111/j.1365-2966.2006.10025.x) is investigated. The technique is used to detect the Southern delta-Aquarid meteor shower, and a previously unknown weak shower. Meteoroid speeds obtained using the Fresnel transform are presented. The diurnal, annual, and height variations of meteoroid speeds are presented, with the results found to be consistent with those obtained using specular meteor radars. Meteoroid speed estimates for echoes identified as Southern delta-Aquarid and Sextantid meteor candidates show good agreement with the theoretical pre-atmospheric speeds of these showers (41 km s⁻¹ and 32 km s⁻¹, respectively). The meteoroid speeds estimated for these showers show decreasing speed with decreasing height, consistent with the effects of meteoroid deceleration. Finally, we illustrate how the new radiant mapping and meteoroid speed techniques can be combined for unambiguous meteor shower detection, and use these techniques to detect a previously unknown weak shower.
Mebane, Christopher A.
2015-01-01
Criticisms of the uses of the no-observed-effect concentration (NOEC) and the lowest-observed-effect concentration (LOEC) and more generally the entire null hypothesis statistical testing scheme are hardly new or unique to the field of ecotoxicology [1-4]. Among the criticisms of NOECs and LOECs is that statistically similar LOECs (in terms of p value) can represent drastically different levels of effect. For instance, my colleagues and I found that a battery of chronic toxicity tests with different species and endpoints yielded LOECs with minimum detectable differences ranging from 3% to 48% reductions from controls [5].
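A hedged sketch of how a minimum detectable difference can be expressed as a percent reduction from the control mean; this uses a plain two-sample t-test approximation with invented endpoint values, not the Dunnett-type procedures typically used in chronic toxicity testing:

```python
# Sketch (assumptions noted above): minimum detectable difference (MDD) for a
# control-vs-treatment comparison, expressed as a percent of the control mean.
import numpy as np
from scipy import stats

def mdd_percent(control_mean, pooled_sd, n_per_group, alpha=0.05, power=0.80):
    df = 2 * (n_per_group - 1)
    t_alpha = stats.t.ppf(1 - alpha, df)        # one-sided test for a reduction
    t_beta = stats.t.ppf(power, df)
    se_diff = pooled_sd * np.sqrt(2.0 / n_per_group)
    mdd = (t_alpha + t_beta) * se_diff
    return 100.0 * mdd / control_mean

# Hypothetical endpoint: control mean growth 20 mg, SD 4 mg, 4 replicates per level
print(f"MDD = {mdd_percent(20.0, 4.0, 4):.0f}% of control")
```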
Hudson, Parisa; Hudson, Stephen D.; Handler, William B.; Scholl, Timothy J.; Chronik, Blaine A.
2010-01-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods were <15%. Comparison of wire patterns obtained using the two design algorithms shows that minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method. PMID:20411157
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
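A rough sketch of the kind of autocorrelation-based step (Ad1) and stride (Ad2) regularity computation described above, applied to a synthetic acceleration signal; the cadence, peak-search windows and normalization follow a generic Moe-Nilssen-style approach and are illustrative assumptions, not the authors' implementation:

```python
# Sketch: step (Ad1) and stride (Ad2) regularity from the unbiased autocorrelation
# of a trunk acceleration signal; the synthetic signal is illustrative only.
import numpy as np

def unbiased_autocorr(x):
    x = x - x.mean()
    n = len(x)
    full = np.correlate(x, x, mode="full")[n - 1:]
    return full / (x.var() * np.arange(n, 0, -1))   # normalize by overlap length

fs = 100.0                                           # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
step_f, stride_f = 1.8, 0.9                          # hypothetical cadence
acc = np.sin(2 * np.pi * step_f * t) + 0.5 * np.sin(2 * np.pi * stride_f * t)
acc += 0.1 * np.random.default_rng(2).normal(size=t.size)

ac = unbiased_autocorr(acc)
step_lag = int(fs / step_f)
stride_lag = int(fs / stride_f)
ad1 = ac[step_lag - 10: step_lag + 10].max()         # step regularity
ad2 = ac[stride_lag - 10: stride_lag + 10].max()     # stride regularity
print(f"Ad1 (step) = {ad1:.2f}, Ad2 (stride) = {ad2:.2f}")
```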
Staroverov, Sergey A; Volkov, Alexei A; Fomin, Alexander S; Laskavuy, Vladislav N; Mezhennyy, Pavel V; Kozlov, Sergey V; Larionov, Sergey V; Fedorov, Michael V; Dykman, Lev A; Guliy, Olga I
2015-01-01
Mini-antibodies with a specific response to ferritin have been produced for the first time using sheep phage libraries (Griffin.1, Medical Research Council, Cambridge, UK). The produced phage antibodies were used for the first time for the development of diagnostic test kits for ferritin detection in the blood of cattle. An immunodot assay with secondary biospecific labeling is suggested as a means of ferritin detection in cow blood serum (antiferritin phage antibodies and rabbit antiphage antibodies conjugated with different labels). Colloidal gold, gold nanoshells, and horseradish peroxidase used as labels showed a similar response when detecting a ferritin concentration of 0.2 mg/mL. It is shown that the method of solid-phase immunoassay with visual readout of the results allows determination of the minimum concentration of ferritin in the blood of cows at 0.225 g/mL.
Sharma, Amit Kumar; Gangwar, Mayank; Kumar, Dharmendra; Nath, Gopal; Kumar Sinha, Akhoury Sudhir; Tripathi, Yamini Bhushan
2016-01-01
Objective: This study aims to evaluate the antimicrobial activity, phytochemical studies and thin layer chromatography analysis of machine oil, hexane extract of seed oil and methanol extract of presscake & latex of Jatropha curcas Linn (family Euphorbiaceae). Materials and Methods: J. curcas extracts were subjected to preliminary qualitative phytochemical screening to detect the major phytochemicals, followed by determination of their reducing power and of the phenol and flavonoid content in different fractions. Thin layer chromatography was also performed using different solvent systems for the analysis of the number of constituents in the plant extracts. Antimicrobial activity was evaluated by the disc diffusion method, while the minimum inhibitory concentration, minimum bactericidal concentration and minimum fungicidal concentration were calculated by the microdilution method. Results: The methanolic fractions of latex and cake exhibited marked antifungal and antibacterial activities against Gram-positive and Gram-negative bacteria. Phytochemical analysis revealed the presence of alkaloids, saponins, tannins, terpenoids, steroids, glycosides, phenols and flavonoids. Reducing power showed a dose-dependent increase in concentration compared with the standard quercetin. Furthermore, this study recommends the isolation and separation of the bioactive compounds responsible for the antibacterial activity, which could be done using different chromatographic methods such as high-performance liquid chromatography (HPLC), GC-MS, etc. Conclusion: The results of the above study suggest that all parts of the plant possess potent antibacterial activity. Hence, it is important to isolate the active principles for further testing of antimicrobial and other biological efficacy. PMID:27516977
Differential detection of Gaussian MSK in a mobile radio environment
NASA Technical Reports Server (NTRS)
Simon, M. K.; Wang, C. C.
1984-01-01
Minimum shift keying with Gaussian shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems, emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method, both in the presence of an additive Gaussian noise background and Rician fading.
A Multi-Level Decision Fusion Strategy for Condition Based Maintenance of Composite Structures
Sharif Khodaei, Zahra; Aliabadi, M.H.
2016-01-01
In this work, a multi-level decision fusion strategy is proposed which weighs the Value of Information (VoI) against the intended functions of a Structural Health Monitoring (SHM) system. This paper presents a multi-level approach for three different maintenance strategies in which the performance of the SHM system is evaluated against its intended functions. Level 1 diagnosis establishes damage existence with a minimum number of sensors covering a large area, by finding the maximum energy difference between guided waves propagating in the pristine structure and in the post-impact state; Level 2 diagnosis provides damage detection and approximate localization using an approach based on Electro-Mechanical Impedance (EMI) measures, while Level 3 characterizes damage (exact location and size) in addition to its detection by utilising a Weighted Energy Arrival Method (WEAM). The proposed multi-level strategy is verified and validated experimentally by detection of Barely Visible Impact Damage (BVID) on a curved composite fuselage panel. PMID:28773910
POLIX: A Thomson X-ray polarimeter for a small satellite mission
NASA Astrophysics Data System (ADS)
Paul, Biswajit; Gopala Krishna, M. R.; Puthiya Veetil, Rishin
2016-07-01
POLIX is a Thomson X-ray polarimeter for a small satellite mission of ISRO. The instrument consists of a collimator, a scatterer and a set of proportional counters to detect the scattered X-rays. We will describe the design, specifications, sensitivity, and development status of this instrument and some of the important scientific goals. This instrument will provide an unprecedented opportunity to measure X-ray polarisation in the medium energy range in a large number of sources of different classes, with a minimum detectable linear polarisation degree of 2-3%. The prime objects for observation with this instrument are the X-ray bright accretion-powered neutron stars, accreting black holes in different spectral states, rotation-powered pulsars, magnetars, and active galactic nuclei. This instrument will be a bridge between the soft X-ray polarimeters and the Compton polarimeters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, a desired latency, a minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real time to ensure that the power grid data network does not become overloaded and/or fail.
Validation of Minimum Display Requirements for a UAS Detect and Avoid System
NASA Technical Reports Server (NTRS)
Rorie, Conrad; Fern, Lisa; Roberts, Zach; Monk, Kevin; Santiago, Confesor; Shively, Jay
2017-01-01
The full integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS), a prerequisite for enabling a broad range of public and commercial UAS operations, presents several technical challenges to UAS developers, operators and regulators. A primary barrier is the inability of UAS pilots (situated at a ground control station, or GCS) to comply with Title 14 Code of Federal Regulations sections 91.111 and 91.113, which require pilots to see and avoid other aircraft in order to maintain well clear. The present study is the final in a series of human-in-the-loop experiments designed to explore and test the various display and alerting requirements being incorporated into the minimum operational performance standards (MOPS) for a UAS-specific detect and avoid system that would replace the see-and-avoid function required of manned aircraft. Two display configurations were tested - an integrated display and a standalone display - and their impact on pilot response times and ability to maintain DAA well clear was compared. Results indicated that the current draft of the MOPS results in a high level of performance and did not meaningfully differ by display configuration.
Antibacterial activity of antibacterial cutting boards in household kitchens.
Kounosu, Masayuki; Kaneko, Seiichi
2007-12-01
We examined antibacterial cutting boards with antibacterial activity values of either "2" or "4" in compliance with the JIS Z 2801 standard, and compared them with cutting boards with no antibacterial activity. These cutting boards were used in ten different households, and we measured changes in the viable cell counts of several types of bacteria with the drop plate method. We also identified the bacterial flora detected and measured the minimum antimicrobial concentrations of several commonly used antibacterial agents against the kinds of bacteria identified, to determine the expected antibacterial activity of the respective agents. Cutting boards with activity values of both "2" and "4" proved to be antibacterial in actual use, although no correlation between the viable cell counts and the antibacterial activity values was observed. In the kitchen environment, large quantities of Pseudomonas, Flavobacterium, Micrococcus, and Bacillus were detected, and it was confirmed that the common antibacterial agents used in many antibacterial products are effective against these bacterial species. In addition, we measured the minimum antimicrobial concentrations of the agents against lactobacillus, a typical beneficial bacterium, and found that this bacterium is less sensitive to these antibacterial agents than the more common bacteria.
Dual Energy Method for Breast Imaging: A Simulation Study.
Koukou, V; Martini, N; Michail, C; Sotiropoulou, P; Fountzoula, C; Kalyvas, N; Kandarakis, I; Nikiforidis, G; Fountos, G
2015-01-01
Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast-to-noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels.
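An illustrative sketch (not the authors' analytical model) of dual-energy weighted log subtraction and a contrast-to-noise ratio for a simulated calcification; all transmission values, photon counts and ROI positions are assumed:

```python
# Sketch: weighted log subtraction cancels the adipose/glandular contrast so that
# a calcification stands out; CNR is then measured in the subtracted image.
import numpy as np

rng = np.random.default_rng(3)
N, n_photons = 128, 2e4

# Hypothetical transmissions for adipose, glandular and a calcification on glandular tissue
T_low  = {"adip": 0.55, "gland": 0.45, "calc": 0.36}
T_high = {"adip": 0.66, "gland": 0.60, "calc": 0.55}

def image(T):
    img = np.full((N, N), T["adip"])
    img[:, N // 2:] = T["gland"]                 # right half glandular
    img[56:72, 88:104] = T["calc"]               # calcification in the glandular half
    return rng.poisson(img * n_photons) / n_photons   # Poisson photon noise

I_low, I_high = image(T_low), image(T_high)

# Weight chosen to cancel adipose/glandular contrast in the subtracted image
w = np.log(T_high["adip"] / T_high["gland"]) / np.log(T_low["adip"] / T_low["gland"])
DE = np.log(I_high) - w * np.log(I_low)

calc = DE[56:72, 88:104]
bg = DE[20:36, 88:104]                           # background ROI away from the calcification
cnr = abs(calc.mean() - bg.mean()) / bg.std()
print(f"w = {w:.2f}, CNR = {cnr:.1f}")
```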
A differential detection scheme of spectral shifts in long-period fiber gratings
NASA Astrophysics Data System (ADS)
Zhelyazkova, Katerina; Eftimov, Tinko; Smietana, Mateusz; Bock, Wojtek
2010-10-01
In this work we present an analysis of the response of a compact, simple and inexpensive optoelectronic sensor system intended to detect spectral shifts of a long-period fiber grating (LPG). The system makes use of a diffraction grating and a couple of receiving optical fibers that pick up signals at two different wavelengths. This differential detection system provides the same useful information from an LPG-based sensor as a conventional laboratory system using an optical spectrum analyzer to monitor the shift of the LPG attenuation minimum. The design of the fiber detection pair as a function of the parameters of the dispersion grating, the pick-up fiber and the LPG is presented in detail. Simulation of the detection system responses is presented using real data from spectral shifts in nano-coated LPGs caused by the evaporation of various liquids such as water, ethanol and acetone, which are examples of corrosive, flammable and hazardous substances. Fiber optic sensors with similar detection schemes can find applications in structural health monitoring, for moisture detection, for monitoring the spillage of toxic and flammable substances in industry, etc.
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ~2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ~2050, and in the tropics and high latitudes deep into the second half of the 21st century.
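A minimal sketch of the general approach of removing natural cycles with multiple linear regression before fitting a trend, run on a synthetic ozone-like series; the proxies, coefficients and trend values are invented, and the real analysis uses a fuller regression model:

```python
# Sketch: regress out idealized solar and QBO proxies, then fit a post-2000 trend
# to the residuals. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1980, 2051)
n = years.size

solar = np.sin(2 * np.pi * (years - 1980) / 11.0)        # idealized 11-yr solar cycle
qbo = np.sin(2 * np.pi * (years - 1980) / 2.3)           # idealized QBO proxy
trend_true = np.where(years < 1997, -1.0, 0.5)           # DU/yr: decline then recovery
ozone = 300 + np.cumsum(trend_true) + 4 * solar + 2 * qbo + rng.normal(0, 2, n)

# Regress out the natural proxies, keeping the mean level
X = np.column_stack([np.ones(n), solar, qbo])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
residual = ozone - X @ coef + coef[0]

# Post-2000 trend in the residuals
mask = years >= 2000
slope, intercept = np.polyfit(years[mask], residual[mask], 1)
print(f"Post-2000 residual trend: {slope:.2f} DU per year")
```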
Graphene oxide and DNA aptamer based sub-nanomolar potassium detecting optical nanosensor
NASA Astrophysics Data System (ADS)
Datta, Debopam; Sarkar, Ketaki; Mukherjee, Souvik; Meshik, Xenia; Stroscio, Michael A.; Dutta, Mitra
2017-08-01
Quantum-dot (QD) based nanosensors are frequently used by researchers to detect small molecules, ions and different biomolecules. In this article, we present a sensor complex/system comprised of a deoxyribonucleic acid (DNA) aptamer, a gold nanoparticle and a semiconductor QD, attached to a graphene oxide (GO) flake for detection of potassium. As reported herein, it is demonstrated that the QD-aptamer-quencher nanosensor functions even when tethered to GO, opening the way to future applications where sensing can be accomplished simultaneously with other previously demonstrated applications of GO, such as serving as a nanocarrier for drug delivery. It is also demonstrated that the DNA-based thrombin binding aptamer used in this study undergoes the conformational change needed for sensing even when the nanosensor complex is anchored to the GO. Analysis with the Hill equation indicates that the interaction between the aptamer and potassium follows sigmoidal Hill kinetics. It is found that the quenching efficiency of the optical sensor is linear with the logarithm of concentration from 1 pM to 100 nM and decreases for higher concentrations due to the unavailability of aptamer binding sites. Such a simple and sensitive optical aptasensor, with a minimum detection capability of 1.96 pM for potassium ions, can also be employed for in-vitro detection of different physiological ions and pathogens and in disease detection methods.
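A small sketch of fitting the Hill equation to quenching-efficiency data; the E_max, K_half and Hill coefficient used to generate the synthetic points are assumptions, not the paper's fitted values:

```python
# Sketch: Hill equation written in log10 concentration for numerical stability,
# E = E_max / (1 + 10^(n*(logK - logC))), fitted with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_c, e_max, log_k, n):
    return e_max / (1.0 + 10.0 ** (n * (log_k - log_c)))

log_conc = np.linspace(-12, -7, 12)                    # 1 pM to 100 nM
rng = np.random.default_rng(5)
quench = hill(log_conc, 0.60, np.log10(5e-10), 0.9) + rng.normal(0, 0.01, log_conc.size)

popt, _ = curve_fit(hill, log_conc, quench, p0=[0.5, -9.0, 1.0])
e_max, log_k, n = popt
print(f"E_max = {e_max:.2f}, K_half = {10**log_k:.2e} M, Hill coefficient n = {n:.2f}")
```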
Search for magnetic monopoles in lunar material
NASA Technical Reports Server (NTRS)
Alvarez, L. W.; Eberhard, P. H.; Ross, R. R.; Watt, R. D.
1972-01-01
Magnetic monopoles were sought in 19.8 kg of lunar material returned by the Apollo 11, 12 and 14 missions. The search was done with a detector capable of detecting any single monopole of any charge equal to or larger than the minimum value compatible with Dirac's theory. Two experiments were performed, each one with different lunar material. In each experiment the lunar material was divided into several measurement samples. No monopole was found. The magnetic charge of each sample was consistent with zero.
Weighted network analysis of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Iori, Giulia; Precup, Ovidiu V.
2007-03-01
In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.
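A sketch of building a minimum spanning tree from a correlation matrix using the common distance transform d_ij = sqrt(2(1 - rho_ij)); the returns are synthetic and the networkx package is assumed to be available:

```python
# Sketch: correlation matrix -> distance matrix -> minimum spanning tree.
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n_assets, n_obs = 8, 250
common = rng.normal(size=n_obs)                      # shared market factor
returns = 0.5 * common[:, None] + rng.normal(size=(n_obs, n_assets))

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))                    # metric distance from correlation

g = nx.Graph()
for i in range(n_assets):
    for j in range(i + 1, n_assets):
        g.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(g)
print(sorted(mst.edges(data="weight")))
```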
Rényi-Fisher entropy product as a marker of topological phase transitions
NASA Astrophysics Data System (ADS)
Bolívar, J. C.; Nagy, Ágnes; Romera, Elvira
2018-05-01
The combined Rényi-Fisher entropy product of electrons plus holes displays a minimum at the charge neutrality points. The Stam-Rényi difference and the Stam-Rényi uncertainty product of the electrons plus holes show maxima at the charge neutrality points. Topological quantum numbers capable of detecting the topological insulator and the band insulator phases are defined. Upper and lower bounds for the position and momentum space Rényi-Fisher entropy products are derived.
2010-03-03
obtainable while for the free-decay problem we simply have to include the initial conditions as random variables to be predicted. A different approach that...important and useful properties of MLEs is that, under regularity conditions, they are asymptotically unbiased and possess the minimum possible...becomes p_L(z | θ, σ_G², M_i) (i.e. the likelihood is conditional on the specified model). However, in this work we will only consider a single model and drop the
A point of minimal important difference (MID): a critique of terminology and methods.
King, Madeleine T
2011-04-01
The minimal important difference (MID) is a phrase with instant appeal in a field struggling to interpret health-related quality of life and other patient-reported outcomes. The terminology can be confusing, with several terms differing only slightly in definition (e.g., minimal clinically important difference, clinically important difference, minimally detectable difference, the subjectively significant difference), and others that seem similar despite having quite different meanings (minimally detectable difference versus minimum detectable change). Often, nuances of definition are of little consequence in the way that these quantities are estimated and used. Four methods are commonly employed to estimate MIDs: patient rating of change (global transition items); clinical anchors; standard error of measurement; and effect size. These are described and critiqued in this article. There is no universal MID, despite the appeal of the notion. Indeed, for a particular patient-reported outcome instrument or scale, the MID is not an immutable characteristic, but may vary by population and context. At both the group and individual level, the MID may depend on the clinical context and decision at hand, the baseline from which the patient starts, and whether they are improving or deteriorating. Specific estimates of MIDs should therefore not be overinterpreted. For a given health-related quality-of-life scale, all available MID estimates (and their confidence intervals) should be considered, amalgamated into general guidelines and applied judiciously to any particular clinical or research context.
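A short sketch of two of the distribution-based MID estimators mentioned above (one standard error of measurement, and a half-standard-deviation effect size); the scale, scores and reliability are hypothetical:

```python
# Sketch: distribution-based MID estimates from baseline scores on a 0-100 scale.
import numpy as np

rng = np.random.default_rng(7)
baseline_scores = rng.normal(65, 18, size=200).clip(0, 100)   # hypothetical HRQOL scores
reliability = 0.85                                            # e.g., an assumed test-retest ICC

sd = baseline_scores.std(ddof=1)
sem_mid = sd * np.sqrt(1 - reliability)      # 1-SEM criterion
es_mid = 0.5 * sd                            # half-standard-deviation effect size

print(f"SD = {sd:.1f}; 1-SEM MID = {sem_mid:.1f}; 0.5-SD MID = {es_mid:.1f} points")
```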
Target Coverage in Wireless Sensor Networks with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao
2016-01-01
Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
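To make the coverage criterion concrete, a sketch of the joint detection probability under independent probabilistic sensors and a simple greedy selection that stops once a required epsilon is reached; this greedy rule is only an illustration of the formulation, not the paper's PSCA algorithm:

```python
# Sketch: joint detection probability 1 - prod(1 - p_i) and a greedy epsilon-cover.
import numpy as np

def joint_detection(probs):
    return 1.0 - np.prod(1.0 - np.asarray(probs))

def greedy_epsilon_cover(p_detect, eps):
    chosen, probs = [], []
    for idx in np.argsort(p_detect)[::-1]:          # best sensors first
        chosen.append(int(idx))
        probs.append(p_detect[idx])
        if joint_detection(probs) >= eps:
            break
    return chosen, joint_detection(probs)

rng = np.random.default_rng(8)
p = rng.uniform(0.1, 0.6, size=12)                  # per-sensor detection probabilities
sensors, p_joint = greedy_epsilon_cover(p, eps=0.95)
print(f"{len(sensors)} sensors give joint detection {p_joint:.3f}")
```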
NASA Astrophysics Data System (ADS)
Ghiami-Shamami, Fereshteh; Sabziparvar, Ali Akbar; Shinoda, Seirou
2018-06-01
The present study examined annual and seasonal trends in climate-based and location-based indices after detection of artificial change points and application of homogenization. Thirteen temperature and eight precipitation indices were generated at 27 meteorological stations over Iran during 1961-2012. The Mann-Kendall test and Sen's slope estimator were applied for trend detection. Results revealed that almost all indices based on minimum temperature followed warmer conditions. Indicators based on minimum temperature showed less consistency, with more cold and fewer warm events. Climate-based results for all extremes indicated that the semi-arid climate had the most warming events. Moreover, based on location-based results, inland areas showed the most signs of warming. Indices based on precipitation exhibited a negative trend in warm seasons, with the most changes in coastal areas and inland areas, respectively. The results provided evidence of warming and drying since the 1990s. Changes in precipitation indices were much weaker and less spatially coherent. Summer was found to be the most sensitive season in comparison with winter. For arid and semi-arid regions, fewer warm events occurred with increasing latitude, while increasing longitude led to more warming events. Overall, Iran is dominated by a significant increase in warm events, especially in minimum temperature-based (nighttime) indices. This result, in addition to fewer precipitation events, suggests a generally drier regime for the future, which is more evident in the warm season of semi-arid sites. The results could provide beneficial references for water resources and eco-environmental policymakers.
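A compact sketch of the Mann-Kendall test statistic and Sen's slope estimator applied to a synthetic annual minimum-temperature index; the tie and autocorrelation corrections used in practice are omitted:

```python
# Sketch: Mann-Kendall Z statistic (no tie correction) and Sen's slope.
import numpy as np
from scipy import stats

def mann_kendall(x):
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

def sens_slope(x):
    x = np.asarray(x)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

rng = np.random.default_rng(9)
years = np.arange(1961, 2013)
tmin_index = 0.03 * (years - 1961) + rng.normal(0, 0.5, years.size)  # warming nights

z, p = mann_kendall(tmin_index)
print(f"MK Z = {z:.2f}, p = {p:.3f}, Sen's slope = {sens_slope(tmin_index):.3f} per year")
```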
Future changes over the Himalayas: Maximum and minimum temperature
NASA Astrophysics Data System (ADS)
Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.
2018-03-01
An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and its long-term trend under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends and probability distribution functions, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, in time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, the wide range of spatial variability and the disagreements in the magnitude of the trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The diurnal temperature range (DTR) shows an increasing trend across the entire area, with the highest magnitude under RCP8.5. This higher rate of increase is driven by the predominant rise of Tmax compared with Tmin.
Minimum viewing angle for visually guided ground speed control in bumblebees.
Baird, Emily; Kornfeldt, Torill; Dacke, Marie
2010-05-01
To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
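The allocation above is cast as a minimum makespan scheduling problem; as an illustration of that formulation only (not the paper's branch-and-bound or enhanced factor-2 algorithm), here is the classic longest-processing-time greedy heuristic for makespan scheduling with invented job sizes:

```python
# Sketch: LPT heuristic - assign jobs (largest first) to the least-loaded machine.
import heapq

def lpt_schedule(job_sizes, n_machines):
    machines = [(0.0, m, []) for m in range(n_machines)]   # (load, id, assigned jobs)
    heapq.heapify(machines)
    for job, size in sorted(enumerate(job_sizes), key=lambda kv: -kv[1]):
        load, m, jobs = heapq.heappop(machines)
        heapq.heappush(machines, (load + size, m, jobs + [job]))
    return max(load for load, _, _ in machines), machines

# Hypothetical "illumination demands" split across 3 targets (machines)
makespan, assignment = lpt_schedule([7.0, 5.0, 4.0, 3.0, 3.0, 2.0, 2.0], n_machines=3)
print(f"Makespan = {makespan}")
for load, m, jobs in sorted(assignment):
    print(f"  target {m}: jobs {jobs} (load {load})")
```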
Simulating future uncertainty to guide the selection of survey designs for long-term monitoring
Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
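A hedged sketch of checking such a sampling objective by simulation: estimate the power of a candidate design to detect a fixed trend within a fixed number of years at a chosen alpha. The design values, variance components and the pooled-regression test are illustrative simplifications, not an NPS protocol:

```python
# Sketch: Monte Carlo power for detecting a declining trend with a simple pooled
# regression test; site random effects are simulated but not modeled in the test.
import numpy as np
from scipy import stats

def trend_power(n_sites, n_years, trend_per_year, site_sd, resid_sd,
                alpha=0.10, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    detected = 0
    for _ in range(n_sim):
        site_eff = rng.normal(0, site_sd, size=n_sites)[:, None]
        y = 100 + trend_per_year * years + site_eff + rng.normal(0, resid_sd, (n_sites, n_years))
        slope, _, _, p, _ = stats.linregress(np.tile(years, n_sites), y.ravel())
        detected += (p < alpha) and (slope < 0)
    return detected / n_sim

# Can 30 sites revisited annually detect a 2-unit/yr decline in 10 years with 80% power?
print(f"Power = {trend_power(30, 10, -2.0, site_sd=5.0, resid_sd=8.0):.2f}")
```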
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF) due to linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability in near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by considering change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high spatial resolution change maps of the given regions.
Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi
2018-02-06
This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection show no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground states. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the STTTA reliability is determined by comparing the results of the STTTA with the Mariani method, referenced as the timing analysis module (TAM), and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability when compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
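A simplified sketch of the general idea (not the published STTTA): ground contact forces from heel and ball sensors are thresholded with a level re-tuned from recent force maxima and minima, and the two binary states are combined into gait phases; the threshold fraction, window length and synthetic signals are assumptions:

```python
# Sketch: self-tuning threshold on ball/heel ground contact forces, then phase labels.
import numpy as np

def self_tuning_threshold(gcf, frac=0.3, window=200):
    """Per-sample threshold placed at `frac` of the recent force range."""
    thr = np.empty_like(gcf)
    for i in range(len(gcf)):
        seg = gcf[max(0, i - window): i + 1]
        thr[i] = seg.min() + frac * (seg.max() - seg.min())
    return thr

def gait_phases(heel, ball):
    heel_on = heel > self_tuning_threshold(heel)
    ball_on = ball > self_tuning_threshold(ball)
    return np.where(heel_on & ball_on, "flat-foot",
           np.where(heel_on, "heel-strike",
           np.where(ball_on, "push-off", "swing")))

# Synthetic GCFs for two strides sampled at 100 Hz
t = np.linspace(0, 2, 200)
heel = np.clip(np.sin(2 * np.pi * t - 0.3), 0, None)
ball = np.clip(np.sin(2 * np.pi * t - 1.2), 0, None)
print(gait_phases(heel, ball)[::20])
```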
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
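The 90/95 criterion can be made concrete with a one-sided binomial (Clopper-Pearson) lower confidence bound on POD, sketched below. This is a generic textbook check, not the sequential DOEPOD procedure itself; for example, 29 hits in 29 trials gives a 95% lower bound just above 0.90.

```python
from scipy import stats

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the probability of detection."""
    if hits == 0:
        return 0.0
    # Lower endpoint of the exact binomial interval at the requested confidence
    return stats.beta.ppf(1.0 - confidence, hits, trials - hits + 1)

# 29 hits out of 29 trials demonstrates 90% POD with 95% confidence
print(pod_lower_bound(29, 29))   # ~0.902
```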
Ahmed, Asm Sabbir; Hauck, Barry; Kramer, Gary H
2012-08-01
This study described the performance of an array of high-purity germanium detectors designed with two different end cap materials, steel and carbon fibre. The advantages and disadvantages of using this detector type in the estimation of the minimum detectable activity (MDA) for different energy peaks of the isotope 152Eu were illustrated. A Monte Carlo model was developed to study the detection efficiency of the detector array. A voxelised Lawrence Livermore torso phantom, equipped with lung, chest plates and overlay plates, was used to mimic a typical lung counting protocol with the array of detectors. The lung of the phantom simulated the volumetric source organ. A significantly low MDA was estimated for the energy peak at 40 keV and a chest wall thickness of 6.64 cm.
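For context, the MDA quoted in counting studies like this one is commonly computed from the Currie expression for the detection limit; a hedged sketch follows. The formula is the standard textbook form, and the numbers in the example are placeholders, not values from this study.

```python
import math

def currie_mda(background_counts, counting_time_s, efficiency, gamma_yield):
    """Currie-style minimum detectable activity (Bq) for a counting measurement.

    L_D = 2.71 + 4.65*sqrt(B) net counts are detectable above a background of B
    counts; dividing by counting time, detection efficiency and gamma emission
    probability converts counts to activity. This is the generic textbook
    expression, shown only to make the MDA concept in the abstract concrete.
    """
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (counting_time_s * efficiency * gamma_yield)

# Illustrative numbers only (not from the study)
print(currie_mda(background_counts=400, counting_time_s=3600,
                 efficiency=0.01, gamma_yield=0.30))
```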
Research on Abnormal Detection Based on Improved Combination of K-means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is internally compact and well separated from the others. Then, for each class, the SVDD algorithm is used to construct a minimum enclosing hypersphere from the training samples. The class membership of a test sample is determined by computing its distance to the centers of the hyperspheres constructed by SVDD: if the distance is smaller than the hypersphere radius, the test sample belongs to that class; otherwise it does not. After these comparisons, the test sample is finally classified as normal or anomalous. The proposed anomaly detection algorithm is evaluated with the KDD CUP99 data set. The results show that the algorithm achieves a high detection rate and a low false alarm rate, making it an effective network security protection method.
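The hedged sketch below illustrates the overall scheme described above: cluster the normal training data, then fit a one-class boundary per cluster and flag test records that fall outside every boundary. scikit-learn's OneClassSVM with an RBF kernel is used as a stand-in for SVDD (the two formulations are closely related), and all parameters and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))           # placeholder "normal" records
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),   # normal-looking test records
                  rng.normal(8.0, 1.0, size=(5, 4))])   # anomalous test records

# Cluster the normal traffic, then fit one boundary per cluster
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)
models = [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal[clusters.labels_ == k])
          for k in range(3)]

# A record is flagged as an anomaly if no per-cluster model accepts it
inside_any = np.any([m.predict(test) == 1 for m in models], axis=0)
print(~inside_any)   # True = flagged as an anomaly
```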
Bao, Hongmei; Zhao, Yuhui; Wang, Yunhe; Xu, Xiaolong; Shi, Jianzhong; Zeng, Xianying; Wang, Xiurong; Chen, Hualan
2014-01-01
A novel influenza A (H7N9) virus has emerged in China. To rapidly detect this virus in clinical samples, we developed a reverse transcription loop-mediated isothermal amplification (RT-LAMP) method for the detection of the H7N9 virus. The minimum detection limit of the RT-LAMP assay was 0.01 PFU of H7N9 virus, making this method 100-fold more sensitive than conventional RT-PCR for the detection of the H7N9 virus. The H7N9 virus RT-LAMP assays can efficiently detect H7N9 influenza virus RNA from different sources (chickens, pigeons, the environment, and humans). No cross-reactive amplification with the RNA of other subtype influenza viruses or of other avian respiratory viruses was observed. The assays can effectively detect H7N9 influenza virus RNA in drinking water, soil, cloacal swab, and tracheal swab samples collected from live poultry markets, as well as human H7N9 virus, in less than 30 min. These results suggest that the H7N9 virus RT-LAMP assays are efficient, practical, and rapid diagnostic methods for the epidemiological surveillance and diagnosis of influenza A (H7N9) virus in samples from different sources. PMID:24689044
Characteristics of low-latitude ionospheric depletions and enhancements during solar minimum
NASA Astrophysics Data System (ADS)
Haaser, R. A.; Earle, G. D.; Heelis, R. A.; Klenzing, J.; Stoneback, R.; Coley, W. R.; Burrell, A. G.
2012-10-01
Under the waning solar minimum conditions during 2009 and 2010, the Ion Velocity Meter, part of the Coupled Ion Neutral Dynamics Investigation aboard the Communication/Navigation Outage Forecasting System satellite, is used to measure in situ nighttime ion densities and drifts at altitudes between 400 and 550 km during the hours 21:00-03:00 solar local time. A new approach to detecting and classifying well-formed ionospheric plasma depletions and enhancements (bubbles and blobs) with scale sizes between 50 and 500 km is used to develop geophysical statistics for the summer, winter, and equinox seasons during the quiet solar conditions. Some diurnal and seasonal geomagnetic distribution characteristics confirm previous work on equatorial irregularities and scintillations, while other elements reveal new behaviors that will require further investigation before they may be fully understood. Events identified in the study reveal very different and often opposite behaviors of bubbles and blobs during solar minimum. In particular, more bubbles demonstrating deeper density fluctuations and faster perturbation plasma drifts typically occur earlier near the magnetic equator, while blobs of similar magnitude occur more often far away from the geomagnetic equator closer to midnight.
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.
Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.
[Detection of intraorbital foreign material using MDCT].
Hoffstetter, P; Friedrich, C; Framme, C; Hoffstetter, M; Zorger, N; Stierstorfer, K; Ross, C; Uller, W; Müller-Wille, R; Rennert, J; Jung, E M; Schreyer, A G
2011-06-01
To assess the possibilities for detection of orbital foreign bodies in multidetector CT (MDCT), with a focus on glass slivers. Hounsfield units (HU) of 20 different materials, including 16 different types of glass, among them 4 different types of ophthalmic lenses, were measured systematically in an experiment. The measurements were performed using a standardized protocol, with an orbital phantom scanned with 16-slice MDCT. Using the resulting density values, the smallest detectable volume was calculated. Using these data we produced slivers of 5 different glass types in the sub-millimeter range and calculated their volume. These micro-slivers underwent another CT scan using the same protocol as mentioned above to experimentally discern and confirm the detection limit for micro-slivers made of different materials. Glass has comparatively high density values of at least 2000 HU. The density of glasses with strong refraction is significantly higher and reaches up to 12,400 HU. We calculated a minimum detectable volume of 0.07 mm³ for glass with a density of 2000 HU. Only glass slivers with a density higher than 8300 HU were experimentally detectable in the sub-millimeter range, down to a volume as small as 0.01 mm³. Less dense glass slivers could not be seen, even though their volume was above the theoretically calculated threshold for detection. Due to its high density of at least 2000 HU, glass is usually easily recognizable as an orbital foreign body. The detection threshold depends on the object's density and size and can be as low as 0.01 mm³ in the case of glass with strong refraction and thus high density. The detection of glass as an orbital foreign body appears to be reliable for slivers with a volume of at least 0.2 mm³ for all types of glass. © Georg Thieme Verlag KG Stuttgart · New York.
Minimum depth of investigation for grounded-wire TEM due to self-transients
NASA Astrophysics Data System (ADS)
Zhou, Nannan; Xue, Guoqiang
2018-05-01
The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying of the subsurface. However, it is commonly observed that the TEM signal is contaminated by the self-transient process occurring at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not suitable for reliable data processing and interpretation. Therefore, for a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behaviour of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to build the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides a guide for the design of survey parameters when the grounded-wire TEM is applied to shallow detection. Finally, the approach is verified through application to a coal field in China.
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, a Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing the energy consumption. Next, a Polynomial Regression Function is applied to combine the target objects of similar events considered for different sensors; these are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ±25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
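For readers unfamiliar with how such minimum sample sizes arise, the hedged sketch below uses the standard normal-approximation formula for the number of counts per factor level needed to detect a specified difference between two means. The study itself derived its estimates from ANOVA mean squares, so this is only a generic illustration, and the numbers are placeholders.

```python
import math
from scipy import stats

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference `delta` between two
    means with common standard deviation `sigma` (two-sided test).

    This is the generic normal-approximation formula, not the exact procedure
    used in the study.
    """
    z_a = stats.norm.ppf(1.0 - alpha / 2.0)
    z_b = stats.norm.ppf(power)
    return math.ceil(2.0 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

print(n_per_group(sigma=3.0, delta=1.5))   # illustrative numbers only
```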
Development and evaluation of a technique for in vivo monitoring of 60Co in human lungs
NASA Astrophysics Data System (ADS)
de Mello, J. Q.; Lucena, E. A.; Dantas, A. L. A.; Dantas, B. M.
2016-07-01
60Co is a neutron activation product encountered in nuclear power plants and represents a risk of internal exposure for workers, especially those involved in the maintenance of potentially contaminated parts and equipment. The control of 60Co intake by inhalation can be performed through in vivo monitoring. This work describes the evaluation of a technique through the minimum detectable activity and the corresponding minimum detectable effective doses, based on biokinetic and dosimetric models of 60Co in the human body. The results show that the technique is suitable both for the monitoring of occupational exposures and for the evaluation of accidental intakes.
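As a hedged illustration of how a minimum detectable activity is turned into the minimum detectable effective dose mentioned above, the sketch below applies the generic chain intake = MDA / m(t) followed by dose = intake × e(50), where m(t) is the intake retention fraction in the monitored organ at the measurement time and e(50) the committed effective dose coefficient. The function and the numbers are illustrative, not the models or values used in the paper.

```python
def minimum_detectable_dose(mda_bq, retention_fraction, dose_coefficient_sv_per_bq):
    """Convert a counting MDA into a minimum detectable committed effective dose.

    intake = MDA / m(t), where m(t) is the fraction of the intake expected in the
    monitored organ at the measurement time; dose = intake * e(50). The numbers
    below are placeholders, not values derived in the paper.
    """
    intake_bq = mda_bq / retention_fraction
    return intake_bq * dose_coefficient_sv_per_bq

# e.g. MDA of 50 Bq in lung, 5% of the intake retained at measurement time,
# inhalation dose coefficient of 1.0e-8 Sv/Bq (illustrative only)
print(minimum_detectable_dose(50.0, 0.05, 1.0e-8), "Sv")
```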
THREE PLANETS ORBITING WOLF 1061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, D. J.; Wittenmyer, R. A.; Tinney, C. G.
We use archival HARPS spectra to detect three planets orbiting the M3 dwarf Wolf 1061 (GJ 628). We detect a 1.36 M⊕ minimum-mass planet with an orbital period P = 4.888 days (Wolf 1061b), a 4.25 M⊕ minimum-mass planet with orbital period P = 17.867 days (Wolf 1061c), and a likely 5.21 M⊕ minimum-mass planet with orbital period P = 67.274 days (Wolf 1061d). All of the planets are of sufficiently low mass that they may be rocky in nature. The 17.867 day planet falls within the habitable zone for Wolf 1061 and the 67.274 day planet falls just outside the outer boundary of the habitable zone. There are no signs of activity observed in the bisector spans, cross-correlation FWHMs, calcium H and K indices, NaD indices, or Hα indices near the planetary periods. We use custom methods to generate a cross-correlation template tailored to the star. The resulting velocities do not suffer the strong annual variation observed in the HARPS DRS velocities. This differential technique should deliver better exploitation of the archival HARPS data for the detection of planets at extremely low amplitudes.
Marzocchi, O; Breustedt, B; Mostacci, D; Zankl, M; Urban, M
2011-03-01
A goal of whole body counting (WBC) is the estimation of the total body burden of radionuclides disregarding the actual position within the body. To achieve the goal, the detectors need to be placed in regions where the photon flux is as independent as possible from the distribution of the source. At the same time, the detectors need high photon fluxes in order to achieve better efficiency and lower minimum detectable activities. This work presents a method able to define the layout of new WBC systems and to study the behaviour of existing ones using both detection efficiency and its dependence on the position of the source within the body of computational phantoms.
A portable detection system for in vivo monitoring of 131I in routine and emergency situations
NASA Astrophysics Data System (ADS)
Lucena, EA; Dantas, ALA; Dantas, BM
2018-03-01
In vivo monitoring of 131I in the human thyroid is often used to evaluate occupational exposure in nuclear medicine facilities and, in the case of accidental intakes in nuclear power plants, for the monitoring of workers and the population. The device presented in this work consists of a Pb-collimated 3″ × 3″ NaI(Tl) scintillation detector assembled on a tripod and connected to a portable PC. The evaluation of the applicability and limitations of the system is based on the estimation of the committed effective doses associated with the minimum detectable activities in different facilities. It has been demonstrated that the system is suitable for use in routine and accidental situations.
Detection of changes in leaf water content using near- and middle-infrared reflectances
NASA Technical Reports Server (NTRS)
Hunt, E. Raymond, Jr.; Rock, Barrett N.
1989-01-01
A method to detect plant water stress by remote sensing is proposed using indices of near-IR and mid-IR wavelengths. The ability of the Leaf Water Content Index (LWCI) to determine leaf relative water content (RWC) is tested on species with different leaf morphologies. The way in which the Moisture Stress Index (MSI) varies with RWC is studied. In tests with several species, it is found that LWCI is equal to RWC, although the reflectances at 1.6 microns for two different RWC must be known to accurately predict unknown RWC. A linear correlation is found between MSI and RWC, with each species having a different regression equation. Also, MSI is correlated with log10 Equivalent Water Thickness (EWT), with data for all species falling on the same regression line. It is found that the minimum significant change of RWC that could be detected by applying the linear regression equation of MSI to EWT is 52 percent. Because the natural RWC variation from water stress is about 20 percent for most species, it is concluded that the near-IR and mid-IR reflectances cannot be used to remotely sense water stress.
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than those of existing approaches.
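A minimal, hedged sketch of the ratio-based pipeline described above is given below: it forms log-ratio and mean-ratio difference images, fuses them with a plain average (the paper fuses wavelet sub-bands instead), and separates changed from unchanged pixels with two-class k-means as a stand-in for the paper's fuzzy local-information clustering. All parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def change_map(img1, img2, eps=1e-6):
    """Hedged sketch of ratio-based SAR change detection (not the paper's exact method)."""
    img1 = np.asarray(img1, dtype=float) + eps
    img2 = np.asarray(img2, dtype=float) + eps
    log_ratio = np.abs(np.log(img2 / img1))
    # A real implementation would use local neighbourhood means here
    mean_ratio = 1.0 - np.minimum(img1, img2) / np.maximum(img1, img2)
    fused = 0.5 * (log_ratio / (log_ratio.max() + eps) + mean_ratio)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fused.reshape(-1, 1))
    # Call the cluster with the larger fused value the "changed" class
    changed = np.argmax([fused.reshape(-1)[labels == k].mean() for k in (0, 1)])
    return (labels == changed).reshape(fused.shape)
```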
Automatic detection of Martian dark slope streaks by machine learning using HiRISE images
NASA Astrophysics Data System (ADS)
Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui
2017-07-01
Dark slope streaks (DSSs) on the Martian surface are one of the active geologic features that can be observed on Mars nowadays. The detection of DSS is a prerequisite for studying its appearance, morphology, and distribution to reveal its underlying geological mechanisms. In addition, increasingly massive amounts of Mars high resolution data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method by combining interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions to calculate the Local Binary Pattern (LBP) feature and train a DSS classifier using the Adaboost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and false positive rate of 3.7%, which in particular indicates great performance of the method at eliminating non-DSS regions.
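The recognition strategy above (LBP features computed on normalized minimum bounding rectangles, classified with AdaBoost) can be sketched with standard library calls as below. The patch size, LBP settings, and the synthetic training data are illustrative assumptions; the actual samples in the paper come from HiRISE images.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP histogram of a candidate region resampled to a fixed-size patch."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Placeholder training data: grayscale patches cut from the normalized minimum
# bounding rectangles of candidate regions, labelled streak (1) or non-streak (0).
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, size=(32, 32)).astype(np.uint8) for _ in range(100)]
labels = rng.integers(0, 2, size=100)
X = np.array([lbp_histogram(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```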
de Hoop, Bartjan; Gietema, Hester; van Ginneken, Bram; Zanen, Pieter; Groenewegen, Gerard; Prokop, Mathias
2009-04-01
We compared interexamination variability of CT lung nodule volumetry with six currently available semi-automated software packages to determine the minimum change needed to detect the growth of solid lung nodules. We had ethics committee approval. To simulate a follow-up examination with zero growth, we performed two low-dose unenhanced CT scans in 20 patients referred for pulmonary metastases. Between examinations, patients got off and on the table. Volumes of all pulmonary nodules were determined on both examinations using six nodule evaluation software packages. Variability (upper limit of the 95% confidence interval of the Bland-Altman plot) was calculated for nodules for which segmentation was visually rated as adequate. We evaluated 214 nodules (mean diameter 10.9 mm, range 3.3 mm-30.0 mm). Software packages provided adequate segmentation in 71% to 86% of nodules (p < 0.001). In case of adequate segmentation, variability in volumetry between scans ranged from 16.4% to 22.3% for the various software packages. Variability with five to six software packages was significantly less for nodules ≥8 mm in diameter (range 12.9%-17.1%) than for nodules <8 mm (range 18.5%-25.6%). Segmented volumes of each package were compared to each of the other packages. Systematic volume differences were detected in 11/15 comparisons. This hampers comparison of nodule volumes between software packages.
[Comparison of detection sensitivity in rapid-diagnosis influenza virus kits].
Tokuno, Osamu; Fujiwara, Miki; Nakajoh, Yoshimi; Yamanouchi, Sumika; Adachi, Masayo; Ikeda, Akiko; Kitayama, Shigeo; Takahashi, Toshio; Kase, Tetsuo; Kinoshita, Shouhiro; Kumagai, Shunichi
2009-09-01
Rapid-diagnosis kits able to detect influenza A and B virus by immunochromatography, developed by different manufacturers, are useful in early diagnosis but may vary widely in detection sensitivity. We compared sensitivity results for eight virus-detection kits in current use--Quick Chaser FluA, B (Mizuho Medy), Espline Influenza A & B-N (Fujirebio), Capilia Flu A + B (Nippon Beckton Dickinson & Alfesa Pharma), Poctem Influenza A/B (Otsuka Pharma & Sysmex), BD Flu Examan (Nippon Beckton Dickinson), Quick Ex-Flu "Seiken" (Denka Seiken), Quick Vue Rapid SP Influ (DP Pharma Biomedical), and Rapid Testa FLU stick (Daiichi Pure Chemicals)--against influenza virus stocks containing five vaccination strains (one A/H1N1, two A/H3N2, and two B) and six clinical strains (two A/H1N1, two A/H3N2, and two B). Minimum detection concentrations giving immunologically positive signals in serial dilution, and RNA copies in the positive dilutions determined by real-time reverse transcriptase-polymerase chain reaction (RT-PCR), were assayed for all kit and virus stock combinations. RNA log10 copy numbers/mL in dilutions within detection limits were 5.68-7.02, 6.37-7.17, and 6.5-8.13 for A/H1N1, A/H3N2, and B, respectively. Statistically significant differences in sensitivity were observed between some kit combinations. Detection sensitivity tended to be relatively higher for influenza A than B virus. This is assumed to be due to differences in kit methods, such as monoclonal antibodies, specimen-extraction conditions, and other unknown factors.
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any image detriment. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise, and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (up to 100 mAs). Below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. The model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis, because it enhances the impulse component of the signal. The filter coefficients, which greatly influence the performance of minimum entropy deconvolution, are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
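The core idea, finding a filter whose output is as impulsive as possible, can be sketched as a kurtosis-maximization problem, as below. The paper optimizes the coefficients with particle swarm optimization on a spherical parameterization; this hedged sketch uses a generic local optimizer instead, and the filter length and optimizer settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter
from scipy.stats import kurtosis

def deconvolve_max_kurtosis(x, filter_len=16, seed=0):
    """Find an FIR filter whose output maximizes kurtosis (impulse enhancement).

    Minimum entropy deconvolution seeks this kind of filter; here a generic
    local optimizer stands in for the paper's particle swarm optimization.
    """
    rng = np.random.default_rng(seed)

    def neg_kurtosis(h):
        y = lfilter(h, [1.0], x)
        return -kurtosis(y, fisher=False)

    h0 = rng.normal(0.0, 1.0, filter_len)
    res = minimize(neg_kurtosis, h0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x, lfilter(res.x, [1.0], x)
```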
NASA Astrophysics Data System (ADS)
Yu, Yajun; Sanchez, Nancy P.; Yi, Fan; Zheng, Chuantao; Ye, Weilin; Wu, Hongpeng; Griffin, Robert J.; Tittel, Frank K.
2017-05-01
A sensor system capable of simultaneous measurements of NO and NO2 was developed using a wavelength modulation-division multiplexing (WMDM) scheme and multi-pass absorption spectroscopy. A continuous wave (CW), distributed-feedback (DFB) quantum cascade laser (QCL) and a CW external-cavity (EC) QCL were employed for targeting a NO absorption doublet at 1900.075 cm-1 and a NO2 absorption line at 1630.33 cm-1, respectively. Simultaneous detection was realized by modulating both QCLs independently at different frequencies and demodulating the detector signals with LabView-programmed lock-in amplifiers. The sensor operated at a reduced pressure of 40 Torr and a data sampling rate of 1 Hz. An Allan-Werle deviation analysis indicated that the minimum detection limits of NO and NO2 can reach sub-ppbv concentration levels with averaging times of 100 and 200 s, respectively.
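The Allan-Werle deviation mentioned above is the standard way such optimum averaging times are identified; a hedged, non-overlapping implementation is sketched below for a 1 Hz concentration series. The choice of averaging times is illustrative.

```python
import numpy as np

def allan_deviation(y, rate_hz=1.0, taus=(1, 2, 5, 10, 20, 50, 100, 200)):
    """Non-overlapping Allan deviation of a regularly sampled concentration series.

    For each averaging time tau the series is split into blocks, block means are
    taken, and the Allan variance is half the mean squared difference between
    successive block means. This is the textbook formulation, shown only to
    illustrate how the optimum averaging times quoted in the abstract arise.
    """
    y = np.asarray(y, dtype=float)
    out = {}
    for tau in taus:
        m = int(tau * rate_hz)            # samples per block
        n_blocks = y.size // m
        if n_blocks < 2:
            continue
        means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out[tau] = np.sqrt(0.5 * np.mean(np.diff(means) ** 2))
    return out
```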
Webcams for Bird Detection and Monitoring: A Demonstration Study
Verstraeten, Willem W.; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol
2010-01-01
Better insights into bird migration can be a tool for assessing the spread of avian borne infections or ecological/climatologic issues reflected in deviating migration patterns. This paper evaluates whether low budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens distortion were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed and future research should enforce the potential of the processing scheme by exploring and testing alternatives of each individual module or processing step. PMID:22319308
NASA Astrophysics Data System (ADS)
Melo, D.; Yelós, L. D.; Garcia, B.; Rovero, A. C.
2017-10-01
Gamma-ray astronomy opened up the most energetic part of the electromagnetic spectrum using ground-based and orbiting instruments, which provide information for understanding sources of different types. Ground-based telescope arrays use the Cherenkov light produced by the charged particles of extensive air showers generated in the Earth's atmosphere to identify gamma rays. This imposes a minimum energy threshold on the gamma rays that can be detected. Towards the high-energy end of the spectrum, however, the amount of Cherenkov radiation produced by a gamma-ray photon guarantees its detectability, the limiting factor being the low flux of the sources. For this reason, the detection strategy consists of using arrays of small telescopes. In this work, we investigate the feasibility of detecting gamma-ray cascades using Cherenkov telescopes, in the range of 100 GeV to 2 TeV, at the CASLEO site, characterizing the response of a system of three Cherenkov telescopes.
Resonant photoacoustic detection of NO2 traces with a Q-switched green laser
NASA Astrophysics Data System (ADS)
Slezak, Verónica; Codnia, Jorge; Peuriot, Alejandro L.; Santiago, Guillermo
2003-01-01
Resonant photoacoustic detection of NO2 traces by means of a high-repetition-rate pulsed green laser is presented. The resonator is a cylindrical Pyrex glass cell with a measured Q factor of 380 for the first radial mode in air at atmospheric pressure. The system is calibrated with known mixtures in dry air, and a minimum detectable volume concentration of 50 parts in 10^9 is obtained (S/N = 1). Its sensitivity allows one to detect and quantify NO2 traces in the exhaust gases of cars. Before that, the analysis of gas adsorption and desorption on the walls and of changes in the sample composition is carried out in order to minimize errors in the determination of NO2 content upon application of the extractive method. The efficiency of the catalytic converters of several models of automobiles is studied, and the NO2 concentrations in samples from exhausts of different types of engine (gasoline, diesel, and methane gas) at idling operation are measured.
Effect of censoring trace-level water-quality data on trend-detection capability
Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.
1984-01-01
Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data even when the data censored were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analyses.
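The trend test used in such Monte Carlo evaluations is typically the nonparametric Mann-Kendall test; the hedged sketch below implements its normal-approximation form (without tie correction) and shows how one might compare an uncensored series with a version censored at a detection limit. The synthetic series, trend magnitude, and detection limit are made-up illustrations, not the study's simulation settings.

```python
import numpy as np
from scipy import stats

def mann_kendall_p(y):
    """Two-sided p-value of the Mann-Kendall trend test (normal approximation, no tie correction)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    return 2.0 * (1.0 - stats.norm.cdf(abs(z)))

# Hedged sketch of the experiment's idea: the same noisy trending series tested
# with and without censoring at a detection limit (values and limit are made up).
rng = np.random.default_rng(1)
t = np.arange(40)
true = 0.02 * t + rng.normal(0.0, 0.3, t.size)    # trace-level data with a trend
censored = np.where(true < 0.3, 0.3, true)        # values below the limit reported at the limit
print(mann_kendall_p(true), mann_kendall_p(censored))
```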
Inhomogeneous models of the Venus clouds containing sulfur
NASA Technical Reports Server (NTRS)
Smith, S. M.; Pollack, J. B.; Giver, L. P.; Cuzzi, J. N.; Podolak, M.
1979-01-01
Based on the suggestion that elemental sulfur is responsible for the yellow color of Venus, calculations of the reflectivity phase function at 3.4 microns are compared for two sulfur-containing inhomogeneous cloud models and a homogeneous model. Assuming reflectivity observations with 25% or less total error, comparison of the model calculations leads to a minimum detectable mass of sulfur equal to 7% of the mass of sulfuric acid for the inhomogeneous drop model. For the inhomogeneous cloud model the comparison leads to a minimum detectable mass of sulfur between 17% and 38% of the mass of the acid drops, depending upon the actual size of the large particles. It is concluded that moderately accurate 3.4 micron reflectivity observations are capable of detecting quite small amounts of elemental sulfur at the top of the Venus clouds.
van den Beld, Maaike J C; Friedrich, Alexander W; van Zanten, Evert; Reubsaet, Frans A G; Kooistra-Smid, Mirjam A M D; Rossen, John W A
2016-12-01
An inter-laboratory collaborative trial for the evaluation of diagnostics for detection and identification of Shigella species and entero-invasive Escherichia coli (EIEC) was performed. Sixteen Medical Microbiological Laboratories (MMLs) participated. MMLs were interviewed about their diagnostic methods, and a sample panel, consisting of DNA extracts and spiked stool samples with different concentrations of Shigella flexneri, was provided to each MML. The results of the trial showed an enormous variety in the culture-dependent and molecular diagnostic techniques currently used among MMLs. Despite the various molecular procedures, 15 out of 16 MMLs were able to detect Shigella species or EIEC in all the samples provided, showing that the diversity of methods has no effect on the qualitative detection of Shigella flexneri. In the semi-quantitative analysis, by contrast, the minimum and maximum values per sample differed by approximately five threshold cycles (Ct values) between the MMLs included in the study. This indicates that defining a uniform Ct-value cut-off for notification to health authorities is not advisable. Copyright © 2016 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...
Degree Day Requirements for Kudzu Bug (Hemiptera: Plataspidae), a Pest of Soybeans.
Grant, Jessica I; Lamp, William O
2018-04-02
Understanding the phenology of a new potential pest is fundamental for the development of a management program. Megacopta cribraria Fabricius (Hemiptera: Plataspidae), the kudzu bug, is a pest of soybeans first detected in the United States in 2009 and in Maryland in 2013. We observed the phenology of kudzu bug life stages in Maryland, created a Celsius degree-day (CDD) model for development, and characterized the difference between microhabitat and ambient temperatures of both kudzu, Pueraria montana (Lour.) Merr. (Fabales: Fabaceae), and soybeans, Glycine max (L.) Merrill (Fabales: Fabaceae). In 2014, low population numbers yielded limited resolution from field phenology observations. We observed kudzu bug populations persisting within Maryland, but between 2013 and 2016, populations were low compared to populations in the southeastern United States. Based on the degree-day model, kudzu bug eggs require 80 CDD above a minimum temperature of 14°C to hatch. Nymphs require 545 CDD above a minimum temperature of 16°C for development. The CDD model matches field observations when a biofix date of April 1 and a minimum preoviposition period of 17 d are factored in. The model suggests two full generations per year in Maryland. The use of standard air temperature monitors does not affect model predictions for pest management, as microhabitat temperature differences did not show a clear trend between kudzu and soybeans. Ultimately, producers can predict the timing of kudzu bug life stages with the CDD model for timing management plans in soybean fields.
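To make the degree-day arithmetic concrete, here is a hedged sketch that accumulates Celsius degree days with the simple min/max averaging method and could be compared against the 80 CDD (base 14°C) and 545 CDD (base 16°C) thresholds reported above. The averaging method and the example temperatures are illustrative assumptions, not necessarily the authors' procedure.

```python
def cumulative_degree_days(daily_min_max_c, base_c):
    """Accumulate Celsius degree days with the simple averaging method.

    dd = max(0, (Tmin + Tmax)/2 - base). This is a common simplification used
    for illustration; it is not necessarily the method used by the authors.
    """
    total = 0.0
    for t_min, t_max in daily_min_max_c:
        total += max(0.0, (t_min + t_max) / 2.0 - base_c)
    return total

# Illustrative week of daily (min, max) temperatures in Celsius
week = [(12, 24), (13, 26), (15, 28), (16, 30), (14, 27), (13, 25), (15, 29)]
print(cumulative_degree_days(week, base_c=14.0))
```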
NASA Astrophysics Data System (ADS)
He, Minhui; Yang, Bao; Datsenko, Nina M.
2014-08-01
The recent unprecedented warming found in different regions has attracted much attention in recent years. How temperature has really changed on the Tibetan Plateau (TP) remains unknown, since very few high-resolution temperature series are available for this region, where large areas of snow and ice exist. Herein, we develop two Juniperus tibetica Kom. tree-ring width chronologies from different elevations. We found that the two tree-ring series only share high-frequency variability. Correlation, response function and partial correlation analyses indicate that prior-year annual (January-December) minimum temperature is most responsible for juniper radial growth at the higher belt, while the tree-ring width chronology at the lower belt contains more of a precipitation signal and is thus excluded from further analysis. The tree growth-climate model accounted for 40% of the total variance in actual temperature during the common period 1957-2010. The detected temperature signal is further robustly verified by other results. Consequently, a six-century-long annual minimum temperature history was recovered for the first time for the Yushu region, central TP. Interestingly, the rapid warming trend during the past five decades is identified as a significant cold phase in the context of the past 600 years. The recovered temperature series reflects low-frequency variability consistent with other temperature reconstructions over the whole TP region. Furthermore, the present recovered temperature series is associated with the Asian monsoon strength on decadal to multidecadal scales over the past 600 years.
Panchen, Zoe A; Primack, Richard B; Anisko, Tomasz; Lyons, Robert E
2012-04-01
The global climate is changing rapidly and is expected to continue changing in coming decades. Studying changes in plant flowering times during a historical period of warming temperatures gives us a way to examine the impacts of climate change and allows us to predict further changes in coming decades. The Greater Philadelphia region has a long and rich history of botanical study and documentation, with abundant herbarium specimens, field observations, and botanical photographs from the mid-1800s onward. These extensive records also provide an opportunity to validate methodologies employed by other climate change researchers at a different biogeographical area and with a different group of species. Data for 2539 flowering records from 1840 to 2010 were assessed to examine changes in flowering response over time and in relation to monthly minimum temperatures of 28 Piedmont species native to the Greater Philadelphia region. Regression analysis of the date of flowering with year or with temperature showed that, on average, the Greater Philadelphia species studied are flowering 16 d earlier over the 170-yr period and 2.7 d earlier per 1°C rise in monthly minimum temperature. Of the species studied, woody plants with short flowering duration are the best indicators of a warming climate. For monthly minimum temperatures, temperatures 1 or 2 mo prior to flowering are most significantly correlated with flowering time. Studies combining herbarium specimens, photographs, and field observations are an effective method for detecting the effects of climate change on flowering times.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
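For orientation, the Tikhonov-regularized minimum norm inverse that the lambda coefficient enters is x_hat = G^T (G G^T + lambda^2 I)^(-1) y; a hedged toy sketch follows. The gain matrix, noise level, and lambda values are illustrative, and practical details such as noise-covariance whitening and depth weighting are omitted.

```python
import numpy as np

def minimum_norm_estimate(G, y, lam):
    """Tikhonov-regularized minimum norm inverse: x_hat = G.T @ inv(G @ G.T + lam**2 * I) @ y.

    G is the (sensors x sources) lead-field matrix and lam the regularization
    coefficient whose choice the study investigates; whitening and depth
    weighting used in practice are omitted from this sketch.
    """
    n_sensors = G.shape[0]
    kernel = G.T @ np.linalg.inv(G @ G.T + lam ** 2 * np.eye(n_sensors))
    return kernel @ y

# Toy example: 30 sensors, 200 candidate sources, a single active source
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 200))
x_true = np.zeros(200)
x_true[50] = 1.0
y = G @ x_true + rng.normal(scale=0.5, size=30)
x_power = minimum_norm_estimate(G, y, lam=1.0)      # heavier regularization
x_coupling = minimum_norm_estimate(G, y, lam=0.01)  # lighter, as the study suggests for coupling
```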
A Fuel-Efficient Conflict Resolution Maneuver for Separation Assurance
NASA Technical Reports Server (NTRS)
Bowe, Aisha Ruth; Santiago, Confesor
2012-01-01
Automated separation assurance algorithms are envisioned to play an integral role in accommodating the forecasted increase in demand of the National Airspace System. Developing a robust, reliable, air traffic management system involves safely increasing efficiency and throughput while considering the potential impact on users. This experiment seeks to evaluate the benefit of augmenting a conflict detection and resolution algorithm to consider a fuel efficient, Zero-Delay Direct-To maneuver, when resolving a given conflict based on either minimum fuel burn or minimum delay. A total of twelve conditions were tested in a fast-time simulation conducted in three airspace regions with mixed aircraft types and light weather. Results show that inclusion of this maneuver has no appreciable effect on the ability of the algorithm to safely detect and resolve conflicts. The results further suggest that enabling the Zero-Delay Direct-To maneuver significantly increases the cumulative fuel burn savings when choosing resolution based on minimum fuel burn while marginally increasing the average delay per resolution.
The asymmetrical features in electron density during extreme solar minimum
NASA Astrophysics Data System (ADS)
Zhang, Xuemin; Shen, Xuhui; Liu, Jing; Yao, Lu; Yuan, Guiping; Huang, Jianping
2014-12-01
The variations of plasma density in the topside ionosphere during the 23rd/24th solar cycle minimum have attracted increasing attention in recent years. In this analysis, we use electron density (Ne) data from the DEMETER (Detection of Electromagnetic Emissions Transmitted from Earthquake Regions) satellite at altitudes of 660-710 km to investigate the solstitial and equinoctial asymmetry in the geomagnetic coordinate system at local times (LT) 1030 and 2230 during 2005-2010, especially in the solar minimum years 2008-2009. The results reveal that ΔNe (December-June) is always positive over the Southern Hemisphere and negative over the Northern Hemisphere at both LT 1030 and LT 2230; only at 0-10°N does the winter anomaly occur, with ΔNe (December-June) > 0, and its amplitude becomes smaller with the declining solar flux from 2005 to 2009. The ΔNe between September and March is completely negative during 2005-2008, but in 2009 it turns positive at latitudes of 20°S-40°N at LT 1030 and 10°S-20°N at LT 2230. Furthermore, the solstitial and equinoctial asymmetry indices (AI) are calculated and studied, respectively, both of which depend on local time, latitude and longitude. Notable differences from 2005-2008 occur at higher latitudes in the solar minimum year 2009. The equinoctial AI at LT 2230 is quite consistent with the trend of the solar flux, with the lowest absolute AI occurring in 2009, the extreme solar minimum, but the solstitial AI exhibits an anomalous enhancement during 2008 and 2009, with larger AI than in 2005-2007. Comparison with the neutral composition at 500 km altitude illustrates that [O/N2] and [O] play a role in the daytime and nighttime asymmetry of Ne in the topside ionosphere.
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
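The consensus step itself reduces, at the pixel level, to a logical AND of the two candidate masks; a minimal, hedged sketch is below. Region-level matching of connected components, which a practical implementation would likely use, is omitted.

```python
import numpy as np

def consensus_candidates(mask_afum, mask_bilateral):
    """Keep only suspicious pixels flagged by both stage-1 detectors.

    Each input is a boolean mask of candidate regions from one detection
    algorithm; the logical AND implements the "consensus" marking described
    in the abstract.
    """
    return np.logical_and(mask_afum, mask_bilateral)
```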
Bergaoui, K; Reguigui, N; Gary, C K; Brown, C; Cremer, J T; Vainionpaa, J H; Piestrup, M A
2014-12-01
An explosive detection system based on a Deuterium-Deuterium (D-D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting prompt gamma emission (10.82 MeV) following radiative neutron capture by ¹⁴N nuclei. The explosive detection system was built based on a fully high-voltage-shielded, axial D-D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10¹⁰ fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma ray shielding, respectively. The shape and the thickness of the moderators and shields are optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI, BGO, and LaBr3-based γ-ray detectors to different explosives is described. Copyright © 2014 Elsevier Ltd. All rights reserved.
A novel adaptive, real-time algorithm to detect gait events from wearable sensors.
Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona
2015-05-01
A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
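A much-simplified, offline sketch of the event definitions used above (not the paper's adaptive, real-time algorithm); the sampling rate, minimum inter-event spacing, and use of scipy's peak finder are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(angle_deg, gyro_dps, fs=100.0):
    """Offline sketch of the event definitions: Initial Contact (IC) at minima
    of the shank flexion/extension angle, End Contact (EC) and Mid-Swing (MS)
    at minima and maxima of the sagittal angular velocity."""
    min_gap = int(0.4 * fs)                      # assume >= 0.4 s between same-type events
    ic, _ = find_peaks(-np.asarray(angle_deg), distance=min_gap)
    ec, _ = find_peaks(-np.asarray(gyro_dps), distance=min_gap)
    ms, _ = find_peaks(np.asarray(gyro_dps), distance=min_gap)
    to_s = lambda idx: idx / fs                  # convert sample indices to seconds
    return to_s(ic), to_s(ec), to_s(ms)

# Usage: angle and angular velocity streams from one shank-mounted IMU.
# ic_t, ec_t, ms_t = detect_gait_events(angle, gyro, fs=100.0)
```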
Dizman, Secil; Turker, Gurkan; Gurbet, Alp; Mogol, Elif Basagan; Turkcan, Suat; Karakuzu, Ziyaatin
2011-01-01
Objective: To evaluate the effects of two different spinal isobaric levobupivacaine doses on spinal anesthesia characteristics and to find the minimum effective dose in patients undergoing transurethral resection (TUR) surgery. Materials and Methods: Fifty male patients undergoing TUR surgery were included in the study and were randomized into two equal groups: Group LB10 (n=25): 10 mg 0.5% isobaric levobupivacaine (2 ml) and Group LB15 (n=25): 15 mg 0.75% isobaric levobupivacaine (2 ml). Spinal anesthesia was administered via a 25G Quincke spinal needle through the L3–4 intervertebral space. Sensorial block levels were evaluated using the ‘pin-prick test’, and motor block levels were evaluated using the ‘Bromage scale’. The sensorial and motor block characteristics of patients during the intraoperative and postoperative periods and the recovery time from spinal anesthesia were evaluated. Results: In three cases in Group LB10, sensorial block did not reach the T10 level. Complete motor block (Bromage=3) did not occur in eight cases in Group LB10 and in five cases in Group LB15. The highest sensorial dermatomal level detected was higher in Group LB15. In Group LB15, the time to onset of sensorial block and the time to complete motor block were significantly shorter than in Group LB10. Hypotension was observed in one case in Group LB15. No significant difference between groups was detected in two-segment regression time, time to S2 regression, or complete sensorial block regression time. Complete motor block regression time was significantly longer in Group LB15 than in Group LB10 (p<0.01). Conclusion: Our findings showed that the minimum effective spinal isobaric levobupivacaine dose was 10 mg for TUR surgery. PMID:25610173
Development of a homogeneous pulse shape discriminating flow-cell radiation detection system
NASA Astrophysics Data System (ADS)
Hastie, K. H.; DeVol, T. A.; Fjeld, R. A.
1999-02-01
A homogeneous flow-cell radiation detection system which utilizes coincidence counting and pulse shape discrimination circuitry was assembled and tested with five commercially available liquid scintillation cocktails. Two of the cocktails, Ultima Flo (Packard) and Mono Flow 5 (National Diagnostics), have low viscosities and are intended for flow applications; and three of the cocktails, Optiphase HiSafe 3 (Wallac), Ultima Gold AB (Packard), and Ready Safe (Beckman), have higher viscosities and are intended for static applications. The low viscosity cocktails were modified with 1-methylnaphthalene to increase their capability for alpha/beta pulse shape discrimination. The sample loading and pulse shape discriminator setting were optimized to give the lowest minimum detectable concentration for alpha radiation in a 30 s count time. Of the higher viscosity cocktails, Optiphase HiSafe 3 had the lowest minimum detectable activities for alpha and beta radiation, 0.2 and 0.4 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 70% and the corresponding alpha/beta spillover was 5.5%. Of the low viscosity cocktails, Mono Flow 5 modified with 2.5% (by volume) 1-methylnaphthalene resulted in the lowest minimum detectable activities for alpha and beta radiation; 0.3 and 0.5 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 50%, and the corresponding alpha/beta spillover was 16.6%. HiSafe 3 at a 10% sample loading was used to evaluate the system under simulated flow conditions.
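The abstract does not state which detection-limit formula was used to optimize the cocktails; for orientation only, a hedged sketch of a minimum detectable activity calculation using the common Currie expression, with purely illustrative numbers:

```python
import math

def currie_mda(background_cps, count_time_s, efficiency, volume_ml):
    """Minimum detectable activity (Bq/ml) from the Currie expression,
    L_D = 2.71 + 4.65*sqrt(B) counts, where B is the expected background in
    the counting window. Illustrative only -- the paper does not state which
    MDA/MDC formula was applied."""
    b_counts = background_cps * count_time_s
    l_d = 2.71 + 4.65 * math.sqrt(b_counts)          # detection limit in counts
    return l_d / (efficiency * count_time_s * volume_ml)

# Example: 2 cps background, 30 s count, 90% alpha efficiency, 0.5 ml in the cell.
print(round(currie_mda(2.0, 30.0, 0.9, 0.5), 2), "Bq/ml")
```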
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size detected with a minimum 90% probability at 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
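A small sketch of the binomial reasoning behind the point estimate demonstration; the logistic POD curve and flaw sizes in the second part are purely illustrative assumptions, not the paper's data:

```python
import numpy as np

def prob_passing_demo(pod_per_flaw):
    """Probability of passing a point-estimate demonstration in which every
    flaw in the set must be detected (binomial model, independent trials)."""
    return float(np.prod(pod_per_flaw))

# Classic 29-of-29 case: if the true POD at the demo flaw size were exactly 0.90,
# the chance of passing is 0.90**29 ~= 0.047, which is why 29 successes out of 29
# supports a 90% POD / 95% confidence claim.
print(prob_passing_demo(np.full(29, 0.90)))

# With a spread of flaw sizes, PPD depends on the POD at each size, e.g. for a
# hypothetical logistic POD curve and hypothetical sizes (units arbitrary):
a = np.linspace(1.0, 1.4, 29)
pod = 1.0 / (1.0 + np.exp(-(a - 1.0) / 0.05))
print(prob_passing_demo(pod))
```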
The objectives of this presentation are to: review the history of distribution system chlorination regulations, raise awareness of the meaning of detectable residual as it relates to chloramines, and perhaps renew discussion of minimum disinfectant residuals.
NASA Astrophysics Data System (ADS)
Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma
2013-12-01
A satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than that at visible wavelengths. Therefore, it is thought possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses the first and second minimum reflectances at 380 nm and 680 nm, based on the difference in the Rayleigh scattering contribution for these two bands. Then, we developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance; this effect has previously proven difficult to remove owing to the infrequent sampling rate associated with the three-day recursion period of GOSAT and the narrow CAI swath of 1000 km. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction technique based on minimum radiances.
Determination of total sulfur content via sulfur-specific chemiluminescence detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubala, S.W.; Campbell, D.N.; DiSanzo, F.P.
A specially designed system, based upon sulfur-specific chemiluminescence detection (SSCD), was developed to permit the determination of total sulfur content in a variety of samples. This type of detection system possesses several advantages such as excellent linearity and selectivity, low minimum detectable levels, and an equimolar response to various sulfur compounds. This paper will focus on the design and application of a sulfur-specific chemiluminescence detection system for use in determining total sulfur content in gasoline.
Baseline requirements for detecting biosignatures with the HabEx and LUVOIR mission concepts
NASA Astrophysics Data System (ADS)
Wang, Ji; Mawet, Dimitri; Ruane, Garreth; Delorme, Jacques-Robert; Klimovich, Nikita; Hu, Renyu
2017-09-01
A milestone in understanding life in the universe is the detection of biosignature gases in the atmospheres of habitable exoplanets. Future mission concepts under study by the 2020 decadal survey, e.g., HabEx and LUVOIR, have the potential of achieving this goal. We investigate the baseline requirements for detecting four molecular species, H2O, O2, CH4, and CO2. These molecules are highly relevant to habitability and life activity on Earth and other planets. Through numerical simulations, we find the minimum requirement for spectral resolution (R) and starlight suppression level (C) for a given exposure time. We consider scenarios in which different molecules are detected. For example, R = 6400 (400) and C = 5 × 10⁻¹⁰ (2 × 10⁻⁹) are required for HabEx (LUVOIR) to detect O2 and H2O for an exposure time of 400 hours for an Earth analog around a solar-type star at a distance of 5 pc. The full results are given in Table 2. The impact of exo-zodiacal contamination and thermal background is also discussed.
NASA Astrophysics Data System (ADS)
Wei-Li, Ma, Weiping; Pan-Qi, Wen-jiao, Dou; Yuan, Xin'an; Yin, Xiaokang
2018-04-01
Stainless steel is widely used in nuclear power plants, for example in high-radioactivity pools, tool storage, and fuel transportation channels, and serves as an important barrier against the leakage of highly radioactive material. Nondestructive evaluation (NDE) methods, such as eddy current testing (ET), ultrasonic testing (UT), penetrant testing (PT), and hybrid detection methods, have been introduced into the inspection of nuclear plants. In this paper, the alternating current field measurement (ACFM) technique was applied to detect and evaluate defects in stainless steel welds. Simulations were carried out for different defect types, crack lengths, and orientations to reveal the relationship between the signals and the defect dimensions, so that the method could be validated by experiment. A 3-axis ACFM probe was developed, and three plates containing 16 defects, taken from components previously in service in a nuclear plant, were examined by automatic detection equipment. The results show that the minimum detectable surface crack length is 2 mm; ACFM thus shows excellent inspection performance for welds in stainless steel and gives an encouraging prospect of broader application.
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using a linear classifier. For seizure event detection, the Bonn University EEG database was used. Three types of EEG signals were classified: signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using the linear classifier. The performance of the classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used was the CHB-MIT scalp EEG database. Along with wavelet based features, the interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. The classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
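A hedged sketch of the wavelet-based feature extraction described above; the wavelet family ("db4"), decomposition level, and entropy definition are assumptions, since the abstract does not specify them:

```python
import numpy as np
import pywt  # PyWavelets

def subband_features(eeg_segment, wavelet="db4", level=4):
    """Per-sub-band features of one EEG segment: energy, (Shannon) entropy,
    standard deviation, maximum, minimum, and mean, in the spirit of the
    study above (parameter choices are illustrative)."""
    feats = []
    for coeffs in pywt.wavedec(np.asarray(eeg_segment, dtype=float), wavelet, level=level):
        energy = np.sum(coeffs ** 2)
        p = coeffs ** 2 / (energy + 1e-12)            # normalized power per coefficient
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [energy, entropy, np.std(coeffs), np.max(coeffs),
                  np.min(coeffs), np.mean(coeffs)]
    return np.array(feats)

# A linear classifier (e.g. a linear discriminant or linear SVM from scikit-learn)
# can then be trained on these feature vectors to label segments as normal vs. seizure.
```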
Invariance of the bit error rate in the ancilla-assisted homodyne detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide
2010-11-15
We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
Auger electron and characteristic energy loss spectra for electro-deposited americium-241
NASA Astrophysics Data System (ADS)
Varma, Matesh N.; Baum, John W.
1983-07-01
Auger electron energy spectra for electro-deposited americium-241 on platinum substrate were obtained using a cylindrical mirror analyzer. Characteristic energy loss spectra for this sample were also obtained at primary electron beam energies of 990 and 390 eV. From these measurements PI, PII, and PIII energy levels for americium-241 are determined. Auger electron energies are compared with theoretically calculated values. Minimum detectability under the present condition of sample preparation and equipment was estimated at approximately 1.2×10⁻⁸ g/cm² or 3.9×10⁻⁸ Ci/cm². Minimum detectability for plutonium-239 under similar conditions was estimated at about 7.2×10⁻¹⁰ Ci/cm².
ERIC Educational Resources Information Center
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben
2016-01-01
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
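The moderator-effect formulations are developed in the cited work itself; for orientation only, a sketch of the familiar main-effect MDES for a two-level cluster randomized trial (Bloom-style multiplier, no covariates), with illustrative design parameters:

```python
from scipy.stats import t

def mdes_2level_crt(J, n, rho, P=0.5, alpha=0.05, power=0.80):
    """Minimum detectable (standardized) main treatment effect for a two-level
    CRT with J clusters of size n, intraclass correlation rho, and proportion P
    of clusters treated. Shown for orientation only -- the cited study derives
    the analogous quantities for moderator effects."""
    df = J - 2
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    var_term = rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n)
    return multiplier * var_term ** 0.5

# Example: 40 schools of 25 students, ICC = 0.15, balanced assignment.
print(round(mdes_2level_crt(J=40, n=25, rho=0.15), 3))
```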
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Inflight fuel tank temperature survey data
NASA Technical Reports Server (NTRS)
Pasion, A. J.
1979-01-01
Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10 and DC-8) are given. The minimum fuel, total air, and static air temperatures expected for a 0.3% probability are summarized in table form. Minimum fuel temperature extremes agreed with calculated predictions, and the minimum fuel temperature did not necessarily equal the minimum total air temperature, even for extreme-weather, long-range flights.
Using lod scores to detect sex differences in male-female recombination fractions.
Feenstra, B; Greenberg, D A; Hodge, S E
2004-01-01
Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect a RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (theta(female), theta(male)); and "constrained," requiring theta(female) = theta(male). We then examined the DeltaELOD (identical with difference between maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant DeltaELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset; and the optimal proportion p(circ) as that value of p that maximizes DeltaELOD. We determined that, surprisingly, p(circ) does not necessarily equal (1/2), although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of theta(female) and theta(male), even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
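A minimal sketch of the lod-score comparison the abstract describes, for fully informative, phase-known meioses; the counts are toy values and the sign convention for the constrained-versus-unconstrained difference is an assumption (only its magnitude matters here):

```python
import numpy as np

def lod(theta, recomb, nonrecomb):
    """Phase-known lod score for one sex: log10 L(theta) - log10 L(0.5)."""
    theta = min(max(theta, 1e-6), 0.5)
    return recomb * np.log10(theta / 0.5) + nonrecomb * np.log10((1 - theta) / 0.5)

def delta_lod(rec_m, nonrec_m, rec_f, nonrec_f):
    """Difference between the sex-specific (unconstrained) maximum lod and the
    constrained (theta_f = theta_m) maximum lod; its expectation plays the role
    of DeltaELOD in the paper. MLEs reduce to simple proportions in this fully
    informative, phase-known sketch."""
    th_m = rec_m / (rec_m + nonrec_m)
    th_f = rec_f / (rec_f + nonrec_f)
    th_c = (rec_m + rec_f) / (rec_m + nonrec_m + rec_f + nonrec_f)
    unconstrained = lod(th_m, rec_m, nonrec_m) + lod(th_f, rec_f, nonrec_f)
    constrained = lod(th_c, rec_m, nonrec_m) + lod(th_c, rec_f, nonrec_f)
    return unconstrained - constrained

# Example: 30 paternal meioses with 3 recombinants vs. 30 maternal with 12.
print(round(delta_lod(3, 27, 12, 18), 2))   # larger values favor theta_f != theta_m
```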
Belkahia, Hanène; Ben Said, Mourad; El Mabrouk, Narjesse; Saidani, Mariem; Cherni, Chayma; Ben Hassen, Mariem; Bouattour, Ali; Messadi, Lilia
2017-09-01
In cattle, anaplasmosis is a tick-borne rickettsial disease caused by Anaplasma marginale, A. centrale, A. phagocytophilum, and A. bovis. To date, no information concerning the seasonal dynamics of single and/or mixed infections by different Anaplasma species in bovines is available in Tunisia. In this work, a total of 1035 bovine blood samples were collected in spring (n=367), summer (n=248), autumn (n=244) and winter (n=176) from five different governorates belonging to three bioclimatic zones in the North of Tunisia. A molecular survey of A. marginale, A. centrale and A. bovis in cattle showed that average prevalence rates were 4.7% (minimum 4.1% in autumn and maximum 5.6% in summer), 7% (minimum 3.9% in winter and maximum 10.7% in autumn) and 4.9% (minimum 2.7% in spring and maximum 7.3% in summer), respectively. A. phagocytophilum was not detected in any of the investigated cattle. Seasonal variations of Anaplasma spp. infection and co-infection rates, overall and according to each bioclimatic area, were recorded. Molecular characterization of the A. marginale msp4 gene indicated a high sequence homology of the revealed strains with A. marginale sequences from African countries. Alignment of 16S rRNA A. centrale sequences showed that Tunisian strains were identical to the vaccine strain from several sub-Saharan African and European countries. The comparison of the 16S rRNA sequences of A. bovis variants showed a perfect homology between Tunisian variants isolated from cattle, goats and sheep. These data are essential for estimating the risk of bovine anaplasmosis in the country and for developing integrated control policies against multi-species pathogen communities infecting humans and different animal species. Copyright © 2017 Elsevier B.V. All rights reserved.
Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y
2004-10-01
Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
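A much-simplified stand-in for the imputation idea (no Gibbs sampler, no covariates, no cumulative-dose case): fit a lognormal to the doses recorded above the MDL and draw each below-MDL value from the truncated fit; the distributional choice and numbers below are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def impute_bmdl(doses, mdl, n_imputations=20):
    """Impute below-minimum-detection-level (BMDL) doses from a lognormal
    fitted to the above-MDL measurements, truncated to (0, MDL).
    Returns an array of shape (n_imputations, len(doses))."""
    observed = np.asarray(doses, dtype=float)
    above = observed[observed >= mdl]
    mu, sigma = np.log(above).mean(), np.log(above).std(ddof=1)
    p_below = stats.norm.cdf((np.log(mdl) - mu) / sigma)   # mass of the fit below MDL
    imputed = np.tile(observed, (n_imputations, 1))
    below_idx = np.where(observed < mdl)[0]
    for m in range(n_imputations):
        u = rng.uniform(0, p_below, size=below_idx.size)    # inverse-CDF truncated sampling
        imputed[m, below_idx] = np.exp(mu + sigma * stats.norm.ppf(u))
    return imputed

# Doses recorded as 0 when below a 0.05 mSv detection level:
doses = [0.0, 0.12, 0.0, 0.30, 0.08, 0.0, 0.21]
print(impute_bmdl(doses, mdl=0.05).mean(axis=0))   # averaged imputations per worker
```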
Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?
da Silva, Hítalo Andrade; dos Passos, Muana Hiandra Pereira; de Oliveira, Valéria Mayaly Alves; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; de Araújo, Rodrigo Cappato
2016-01-01
Objective To evaluate the interday reproducibility, agreement, and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. Methods The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, the standard error of measurement, and the minimum detectable change. Results The factor analysis that combined the items corresponding to anxiety and stress into a single factor, with depression as a second factor, showed a better fit for all 21 items, with higher factor loadings on their respective constructs. The reproducibility values for depression were an intraclass correlation coefficient of 0.86, a standard error of measurement of 0.80, and a minimum detectable change of 2.22; for anxiety/stress, the corresponding values were an intraclass correlation coefficient of 0.82, a standard error of measurement of 1.80, and a minimum detectable change of 4.99. Conclusion The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability and strong internal consistency. The two-factor model, with the anxiety and stress constructs condensed into a single factor, was the most acceptable for the adolescent population. PMID:28076595
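The reported minimum detectable changes are consistent with the usual relation MDC95 = 1.96 × √2 × SEM, as a quick check shows (assuming this is the definition the authors used):

```python
import math

sem_depression, sem_anxiety_stress = 0.80, 1.80
mdc95 = lambda sem: 1.96 * math.sqrt(2) * sem   # MDC at 95% confidence from the SEM
print(round(mdc95(sem_depression), 2), round(mdc95(sem_anxiety_stress), 2))  # -> 2.22 4.99
```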
SSI-ARC Flight Test 3 Data Review
NASA Technical Reports Server (NTRS)
Gong, Chester; Wu, Minghong G.
2015-01-01
The "Unmanned Aircraft System (UAS) Integration into the National Airspace System (NAS)" Project conducted flight test program, referred to as Flight Test 3, at Armstrong Flight Research Center from June - August 2015. Four flight test days were dedicated to the NASA Ames-developed Detect and Avoid (DAA) System referred to as Autoresolver. The encounter scenarios, which involved NASA's Ikhana UAS and a manned intruder aircraft, were designed to collect data on DAA system performance in real-world conditions and uncertainties with four different surveillance sensor systems. Resulting flight test data and analysis results will be used to evaluate the DAA system performance (e.g., trajectory prediction accuracy, threat detection) and to add fidelity to simulation models used to inform Minimum Operating Performance Standards (MOPS) for integrating UAS into routine NAS operations.
Extractive procedure for uranium determination in water samples by liquid scintillation counting.
Gomez Escobar, V; Vera Tomé, F; Lozano, J C; Martín Sánchez, A
1998-07-01
An extractive procedure for uranium determination using liquid scintillation counting with the URAEX cocktail is described. Interference from radon and a strong influence of nitrate ion were detected in this procedure. Interference from radium, thorium and polonium emissions were very low when optimal operating conditions were reached. Quenching effects were considered and the minimum detectable activity was evaluated for different sample volumes. Isotopic analysis of samples can be performed using the proposed method. Comparisons with the results obtained with the general procedure used in alpha spectrometry with passivated implanted planar silicon detectors showed good agreement. The proposed procedure is thus suitable for uranium determination in water samples and can be considered as an alternative to the laborious conventional chemical preparations needed for alpha spectrometry methods using semiconductor detectors.
Schäfer, Klaus; Brockmann, Klaus; Heland, Jörg; Wiesen, Peter; Jahn, Carsten; Legras, Olivier
2005-04-10
The detection limits for NO and NO2 in turbine exhausts obtained by nonintrusive monitoring have to be improved. A sensitivity study with spectral simulations in the mid-infrared showed that multipass-mode Fourier-transform infrared (FTIR) absorption spectrometry with a White mirror system is essential for the retrieval of NO2 abundances. A new White mirror system with a parallel infrared beam was developed and tested successfully with a commercial FTIR spectrometer in different turbine test beds. The minimum detection limits for a typical turbine plume of 50 cm in diameter are approximately 6 parts per million (ppm) for NO and 9 ppm for NO2 (as well as 100 ppm for CO2 and 4 ppm for CO).
NASA Astrophysics Data System (ADS)
Papa, A.; Kettle, P.-R.; Ripiccini, E.; Rutar, G.
2016-07-01
Several scintillating fibre prototypes (single- and double-layer) made of 250 μm multi-clad square fibres coupled to silicon photomultipliers have been studied using electrons, positrons and muons at different energies. Current measurements show promising results: already for a single fibre layer and minimum ionizing particles we obtain a detection efficiency ≥ 95 % (mean collected light/fibre ≈ 8 phe), a timing resolution of 550 ps/fibre and a foreseen spatial resolution < 100 μm, based on the achieved negligible optical cross-talk between fibres (< 1 %). We also discuss the performance of a double-layer staggered prototype configuration, for which a full detection efficiency (≥ 99 %) has been measured together with a timing resolution of ≈ 400 ps for double hit events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smitherman, C; Chen, B; Samei, E
2014-06-15
Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and the calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, W. Geoffrey; Gray, David Clinton
Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels will utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with a SNRpixel = 4.335 and a CNRlesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum threshold that must be attained to guard against false positive lesion detection. Conclusion: Low contrast resolution, or detectability, can be properly tested in a manner that is clinically relevant by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROI's placed in the background and cold sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and a CNRlesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to prove acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A
2006-02-01
Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
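A literal, hedged transcription of the baseline measure as described above; the sampling-rate handling, window clipping, and the absolute value taken on the sum are assumptions not stated in the abstract:

```python
import numpy as np

def baseline_measure(egm_mv, fs_hz, center_idx):
    """Baseline measure: product of the sum (mV) and standard deviation (mV)
    of the far-field electrogram samples in a 188-ms window centered on a
    sensed event. abs() on the sum is an assumption for sign handling."""
    half = int(round(0.188 * fs_hz / 2))
    w = np.asarray(egm_mv[max(0, center_idx - half):center_idx + half], dtype=float)
    return abs(np.sum(w)) * np.std(w)

def looks_like_lead_failure(egm_mv, fs_hz, sensed_idx, threshold=0.35):
    """Flag a detection as lead-failure noise if the minimum baseline measure
    over the last 12 sensed events falls below 0.35 mV-mV."""
    measures = [baseline_measure(egm_mv, fs_hz, i) for i in sensed_idx[-12:]]
    return min(measures) < threshold
```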
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Effects of Linking Methods on Detection of DIF.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
1992-01-01
Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218 ... 2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using
Shin, Hye-Young; Park, Hae-Young Lopilly; Jung, Younhea; Choi, Jin-A; Park, Chan Kee
2014-10-01
To compare the initial visual field (VF) defect pattern and the spectral-domain optical coherence tomography (OCT) parameters and investigate the effects of distinct types of optic disc damage on the diagnostic performance of these OCT parameters in early glaucoma. Retrospective, observational study. A total of 138 control eyes and 160 eyes with early glaucoma were enrolled. The glaucomatous eyes were subdivided into 4 groups according to the type of optic disc damage: focal ischemic (FI) group, myopic (MY) group, senile sclerotic (SS) group, and generalized enlargement (GE) group. The values of total deviation (TD) maps were analyzed, and superior-inferior (S-I) differences of TD were calculated. The optic nerve head (ONH) parameters, peripapillary retinal nerve fiber layer (pRNFL), and ganglion cell-inner plexiform layer (GCIPL) thicknesses were measured. Comparison of diagnostic ability using area under the receiver operating characteristic curves (AUCs). The S-I and central S-I difference of the FI group were larger than those of the GE group. The rim area of the SS group was larger than those of the 3 other groups, and the vertical cup-to-disc ratio (CDR) of the GE group was larger than that of the MY group. In addition, the minimum and inferotemporal GCIPL thicknesses of the FI group were smaller than those of the GE group. The AUC of the rim area (0.89) was lower than that of the minimum GCIPL (0.99) in the SS group, and the AUC of the vertical CDR (0.90) was lower than that of the minimum GCIPL (0.99) in the MY group. Furthermore, the AUCs of the minimum GCIPL thicknesses of the FI and MY group were greater than those of the average pRNFL thickness for detecting glaucoma, as opposed to the SS and GE. The OCT parameters differed among the 4 groups on the basis of the distinct optic disc appearance and initial glaucomatous damage pattern. Clinicians should be aware that the diagnostic capability of OCT parameters could differ according to the type of optic disc damage in early glaucoma. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Batool, Nazre; Chellappa, Rama
2014-09-01
Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve the desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with the texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems
Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda
2015-01-01
In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of the problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommendation items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: the rating of target items is always maximum or minimum; and 2) Attack promptness: it takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, we use the chi-square distribution (χ²) to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable toward different attack models and filler sizes. PMID:26267477
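A simplified stand-in for the detector described above, using fixed time windows instead of the paper's dynamic partitioning at "important points"; the window length, minimum counts, rating scale, and significance level are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

def anomalous_intervals(timestamps, ratings, window_days=7, rating_max=5, alpha=0.01):
    """Flag time windows of one item's rating series whose share of extreme
    ratings (maximum or minimum on a 1..rating_max scale) is improbably high
    under a two-cell chi-square test against the item-wide baseline."""
    ts = np.asarray(timestamps, dtype=float)
    rs = np.asarray(ratings)
    extreme_rate = np.mean((rs == rating_max) | (rs == 1))      # item-wide baseline
    flagged = []
    step = window_days * 86400
    edges = np.arange(ts.min(), ts.max() + step, step)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_win = (ts >= lo) & (ts < hi)
        n = in_win.sum()
        if n < 5:
            continue
        observed = ((rs[in_win] == rating_max) | (rs[in_win] == 1)).sum()
        expected = n * extreme_rate
        stat = (observed - expected) ** 2 / (expected + 1e-9) \
             + (expected - observed) ** 2 / (n - expected + 1e-9)
        if stat > chi2.ppf(1 - alpha, df=1):
            flagged.append((lo, hi))
    return flagged
```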
Do the Brazilian sardine commercial landings respond to local ocean circulation?
Gouveia, Mainara B; Gherardi, Douglas F M; Lentini, Carlos A D; Dias, Daniela F; Campos, Paula C
2017-01-01
It has been reported that sea surface temperature (SST) anomalies, flow intensity, and mesoscale ocean processes all affect sardine production, in both eastern and western boundary current systems. Here we tested the hypothesis that extremely high and low commercial landings of the Brazilian sardine fishery in the South Brazil Bight (SBB) are sensitive to different oceanic conditions. An ocean model (ROMS) and an individual-based model (Ichthyop) were used to assess the relationship between oceanic conditions during the spawning season and commercial landings of the Brazilian sardine one year later. Model output was compared with remote sensing and analysis data, showing good consistency. Simulations indicate that mortality of eggs and larvae by low temperature prior to maximum and minimum landings is significantly higher than mortality caused by offshore advection. However, when periods of maximum and minimum sardine landings are compared with respect to these causes of mortality, no significant differences were detected. The results indicate that mortality caused by prevailing oceanic conditions at early life stages alone cannot be invoked to explain the observed extreme commercial landings of the Brazilian sardine. Likely influencing factors include starvation and predation interacting with the strategy of spawning "at the right place and at the right time".
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it is limited by the absence of a method for describing uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums of spline curves allows minimum mortality temperatures in different cities to be compared, and their associations with climate to be investigated, while properly allowing for estimation uncertainty.
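A sketch of the approximate parametric bootstrap idea: redraw the fitted curve's coefficients from their estimated sampling distribution and record where each simulated curve attains its minimum. The polynomial basis, function names, and returned summaries are assumptions, not the authors' code:

```python
import numpy as np

def mmt_bootstrap_ci(temps, coef, vcov, n_boot=5000, ci=0.95, design=None, rng=None):
    """Approximate parametric-bootstrap SE and CI for the minimum of a fitted
    temperature-mortality curve: draw coefficients from N(coef, vcov),
    evaluate each simulated curve on a fine temperature grid, and take the
    temperature at its minimum. `design` maps the grid to the spline basis;
    a plain polynomial basis is assumed here if none is supplied."""
    rng = rng or np.random.default_rng(0)
    grid = np.linspace(np.min(temps), np.max(temps), 501)
    X = design(grid) if design else np.vander(grid, len(coef), increasing=True)
    draws = rng.multivariate_normal(coef, vcov, size=n_boot)
    mmt = grid[np.argmin(draws @ X.T, axis=1)]        # minimum of each simulated curve
    lo, hi = np.percentile(mmt, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
    return mmt.mean(), mmt.std(ddof=1), (lo, hi)       # bootstrap mean, SE, and CI
```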
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Hainz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
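A minimal sketch of the dead-reckoning probe described above, ignoring altitude, turn dynamics, and the false-alert filters; the units, look-ahead horizon, and example geometry are assumptions:

```python
import numpy as np

def dead_reckoning_conflict(p1, v1, p2, v2, horizon_s=300.0, sep_nm=5.0, step_s=5.0):
    """Project both aircraft along their current velocity vectors (the
    "dead-reckoning" trajectory) and return the first time, within the
    look-ahead horizon, at which horizontal separation drops below the
    minimum standard. Positions in NM, velocities in NM/s."""
    p1, v1, p2, v2 = map(np.asarray, (p1, v1, p2, v2))
    for t in np.arange(0.0, horizon_s + step_s, step_s):
        if np.linalg.norm((p1 + t * v1) - (p2 + t * v2)) < sep_nm:
            return t            # predicted time of loss of separation (s)
    return None                 # no conflict predicted within the horizon

# Example: nearly head-on traffic at ~480 kt each, offset 2 NM laterally.
print(dead_reckoning_conflict([0, 0], [0.133, 0.0], [50, 2], [-0.133, 0.0]))
```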
Practical scheme for optimal measurement in quantum interferometric devices
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide
2003-06-01
We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level and to validate them using a priori information about the location of the source, that is, the STN. Second, we investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole approaches (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) approaches (minimum-norm and sLORETA). The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
A new approach to keratoconus detection based on corneal morphogeometric analysis.
Cavas-Martínez, Francisco; Bataille, Laurent; Fernández-Pacheco, Daniel G; Cañavate, Francisco J F; Alió, Jorge L
2017-01-01
To characterize corneal structural changes in keratoconus using a new morphogeometric approach and to evaluate its potential diagnostic ability. Comparative study including 464 eyes of 464 patients (age range, 16 to 72 years) divided into two groups: control group (143 healthy eyes) and keratoconus group (321 keratoconus eyes). Topographic information (Sirius, CSO, Italy) was processed with SolidWorks v2012 and a solid model representing the geometry of each cornea was generated. The following parameters were defined: anterior (Aant) and posterior (Apost) corneal surface areas, area of the cornea within the sagittal plane passing through the Z axis and the apex (Aapexant, Aapexpost) and minimum thickness points (Amctant, Amctpost) of the anterior and posterior corneal surfaces, and average distance from the Z axis to the apex (Dapexant, Dapexpost) and minimum thickness points (Dmctant, Dmctpost) of both corneal surfaces. Significant differences between the control and keratoconus groups were found in Aapexant, Aapexpost, Amctant, Amctpost, Dapexant, Dapexpost (all p<0.001), Apost (p = 0.014), and Dmctpost (p = 0.035). Significant correlations in the keratoconus group were found between Aant and Apost (r = 0.836), Amctant and Amctpost (r = 0.983), and Dmctant and Dmctpost (r = 0.954, all p<0.001). A logistic regression analysis revealed that the detection of keratoconus grade I (Amsler Krumeich) was related to Apost, Atot, Aapexant, Amctant, Amctpost, Dapexpost, Dmctant and Dmctpost (Hosmer-Lemeshow: p>0.05, R2 Nagelkerke: 0.926). The overall percentage of cases correctly classified by the model was 97.30%. Our morphogeometric approach based on the analysis of the cornea as a solid is useful for the characterization and detection of keratoconus.
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous, and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to reliably localize the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
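A hedged sketch of the two ingredients named above: Kittler-Illingworth minimum-error thresholding and a crude projection-based bounding box. The paper's projection analysis is adaptive and more elaborate; the 10% projection cutoff here is an assumption:

```python
import numpy as np

def minimum_error_threshold(img):
    """Kittler-Illingworth minimum-error threshold of a grey-level / thermal
    image (the first step described above); returns an intensity threshold."""
    hist, edges = np.histogram(np.ravel(img), bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    best_t, best_J = 0, np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-6 or w1 < 1e-6:
            continue
        m0 = (centers[:t] * p[:t]).sum() / w0
        m1 = (centers[t:] * p[t:]).sum() / w1
        v0 = ((centers[:t] - m0) ** 2 * p[:t]).sum() / w0
        v1 = ((centers[t:] - m1) ** 2 * p[t:]).sum() / w1
        if v0 <= 0 or v1 <= 0:
            continue
        # Kittler-Illingworth criterion J(t); smaller is better.
        J = 1 + 2 * (w0 * np.log(np.sqrt(v0)) + w1 * np.log(np.sqrt(v1))) \
              - 2 * (w0 * np.log(w0) + w1 * np.log(w1))
        if J < best_J:
            best_t, best_J = t, J
    return centers[best_t]

def face_bbox_from_projections(mask):
    """Crude face bounding box from horizontal/vertical projections of the
    thresholded (warm-region) mask."""
    rows, cols = mask.sum(axis=1), mask.sum(axis=0)
    r = np.where(rows > 0.1 * rows.max())[0]
    c = np.where(cols > 0.1 * cols.max())[0]
    return r[0], r[-1], c[0], c[-1]     # top, bottom, left, right

# Usage: mask = thermal_img > minimum_error_threshold(thermal_img)
#        top, bottom, left, right = face_bbox_from_projections(mask)
```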
NASA Astrophysics Data System (ADS)
Baruah, Upama; Chowdhury, Devasish
2016-04-01
Functionalized graphene oxide quantum dots (GOQDs)-poly(vinyl alcohol) (PVA) hybrid hydrogels were prepared using a simple, facile and cost-effective strategy. GOQDs bearing different surface functional groups were introduced as the cross-linking agent into the PVA matrix, thereby resulting in gelation. Four different types of hybrid hydrogels were prepared using graphene oxide, reduced graphene oxide, ester-functionalized graphene oxide and amine-functionalized GOQDs as cross-linking agents. The hybrid hydrogel prepared with amine-functionalized GOQDs was observed to be the most stable. The potential of this solid sensing platform was subsequently explored as an easy, effective and sensitive method for colorimetric detection of M2+ (Fe2+, Co2+ and Cu2+) in aqueous media. When the amine-functionalized GOQDs-PVA hybrid hydrogel is placed in solutions of Fe2+, Co2+ and Cu2+, it renders the solution brown, orange and blue, respectively, indicating the presence of these ions. The minimum detection limit observed was 1 × 10^-7 M using UV-visible spectroscopy. Further, the applicability of the sensing material was also tested on a mixture of co-existing ions in solution to demonstrate the practical applicability of the system. Insight into the probable mechanistic pathway involved in the detection process is also discussed.
A novel device for quantitative measurement of chloride concentration by fluorescence indicator
NASA Astrophysics Data System (ADS)
Wang, Junsheng; Wu, Xudong; Chon, Chanhee; Gonska, Tanja; Li, Dongqing
2012-02-01
Cystic fibrosis (CF) is a life-threatening genetic disease. At present, the common method for diagnosis of CF is to detect the chloride concentration in sweat using ion-selective electrodes. However, the current sweat testing methods require a relatively large quantity of sweat sample, at least 25 µL, which is very difficult to obtain, especially for newborns. This paper presents a new method and a new device for rapid detection of the chloride concentration from a small volume of solution. In this method, the chloride concentration is determined quantitatively by the fluorescence intensity of MQAE, a chloride ion fluorescent indicator. In this device, the sample is carried by a small piece of filter paper on a cover glass exposed to a UV LED light source. The resulting fluorescent signals are detected by a Si photodiode. Data acquisition and processing are accomplished by LabVIEW software on a PDA. Based on the Stern-Volmer relationship, the effects of different parameters on the fluorescence intensity were analyzed. The significant difference observed between 40 and 60 mM (the borderline chloride concentration for CF) is discussed in this paper. The results show that detection can be completed within 10 s. The minimum detectable volume of the chloride solution is 1 μL. The novel method and device show great potential for CF diagnosis.
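The Stern-Volmer relationship mentioned above maps fluorescence quenching to chloride concentration; a small sketch follows, in which the quenching constant Ksv is an illustrative assumption rather than the device's calibration value.

```python
def chloride_from_fluorescence(f0, f, ksv=118.0):
    """Stern-Volmer: F0/F = 1 + Ksv*[Cl-], so [Cl-] = (F0/F - 1)/Ksv.

    f0: intensity without quencher, f: measured intensity,
    ksv: quenching constant in M^-1 (illustrative value for MQAE).
    Returns the chloride concentration in mol/L.
    """
    return (f0 / f - 1.0) / ksv

# e.g. a 40% drop in fluorescence intensity
print(chloride_from_fluorescence(1.0, 0.6) * 1000, "mM")
```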
MHC class II B diversity in blue tits: a preliminary study.
Aguilar, Juan Rivero-de; Schut, Elske; Merino, Santiago; Martínez, Javier; Komdeur, Jan; Westerdahl, Helena
2013-07-01
In this study, we partly characterize major histocompatibility complex (MHC) class II B in the blue tit (Cyanistes caeruleus). A total of 22 individuals from three different European locations (Spain, The Netherlands, and Sweden) were screened for MHC allelic diversity. The MHC genes were investigated using both PCR-based methods and unamplified genomic DNA with restriction fragment length polymorphism (RFLP) and Southern blots. A total of 13 different exon 2 sequences were obtained independently from DNA and/or RNA, thus confirming gene transcription and likely functionality of the genes. Nine out of 13 alleles were found in more than one country, and two alleles appeared in all countries. Positive selection was detected in the region coding for the peptide binding region (PBR). A maximum of three alleles per individual was detected by sequencing, and the RFLP pattern consisted of 4-7 fragments, indicating a minimum number of 2-4 loci per individual. A phylogenetic analysis demonstrated that the blue tit sequences are divergent compared to sequences from other passerines, resembling a different MHC lineage than those possessed by most passerines studied to date.
Detection of hail signatures from single-polarization C-band radar reflectivity
NASA Astrophysics Data System (ADS)
Kunz, Michael; Kugel, Petra I. S.
2015-02-01
Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of the highest Heidke Skill Score (HSS) or Critical Success Index (CSI) are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
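For reference, the two skill scores used above are simple functions of the 2×2 contingency table produced by categorical verification; the sketch below uses the standard definitions and made-up counts, not the study's verification data.

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Categorical verification scores from a 2x2 contingency table."""
    a, b = float(hits), float(false_alarms)
    c, d = float(misses), float(correct_negatives)
    n = a + b + c + d
    csi = a / (a + b + c)                                   # Critical Success Index
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n  # chance-expected correct
    hss = (a + d - expected) / (n - expected)               # Heidke Skill Score
    return {"CSI": csi, "HSS": hss, "POD": a / (a + c), "FAR": b / (a + b)}

# Made-up counts: detected-with-damage, detected-no-damage, missed-damage, neither
print(skill_scores(hits=40, false_alarms=32, misses=8, correct_negatives=400))
```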
A differentially amplified motion in the ear for near-threshold sound detection
Chen, Fangyi; Zha, Dingjun; Fridberger, Anders; Zheng, Jiefu; Choudhury, Niloy; Jacques, Steven L.; Wang, Ruikang K.; Shi, Xiaorui; Nuttall, Alfred L.
2011-01-01
The ear is a remarkably sensitive pressure fluctuation detector. In guinea pigs, behavioral measurements indicate a minimum detectable sound pressure of ~20 μPa at 16 kHz. Such faint sounds produce 0.1 nm basilar membrane displacements, a distance smaller than conformational transitions in ion channels. It seems that noise within the auditory system would swamp such tiny motions, making weak sounds imperceptible. Here, a new mechanism contributing to a resolution of this problem is proposed and validated through direct measurement. We hypothesize that vibration at the apical end of hair cells is enhanced compared to the commonly measured basilar membrane side. Using in vivo optical coherence tomography, we demonstrated that apical-side vibrations peaked at a higher frequency, had different timing, and were enhanced compared to the basilar membrane. These effects depend nonlinearly on the stimulus level. The timing difference and enhancement are important for explaining how the noise problem is circumvented. PMID:21602821
Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests
Carr, R.S.; Biedenbach, J.M.
1999-01-01
When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, minimum significant differences (MSDs) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, were determined for α ≤ 0.05 and α ≤ 0.01, respectively. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.
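A hedged sketch of the minimum-significant-difference idea behind this criterion; it uses a plain one-sided t critical value as a stand-in for Dunnett's tabulated value (which is somewhat larger), so the numbers are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

def approx_msd(mse, n_per_group, n_groups=2, alpha=0.05):
    """Approximate minimum significant difference for a control-vs-treatment
    comparison: MSD ~ t_crit * sqrt(2 * MSE / n).  Dunnett's critical value
    (used in the study) exceeds the plain t value used here."""
    df_error = n_groups * (n_per_group - 1)
    t_crit = stats.t.ppf(1 - alpha, df_error)
    return t_crit * np.sqrt(2.0 * mse / n_per_group)

# e.g. an error mean square of 120 (in %^2) with 5 replicates per treatment
print(approx_msd(mse=120.0, n_per_group=5))
```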
Chen, Hsing-Yu; Kaneda, Noriaki; Lee, Jeffrey; Chen, Jyehong; Chen, Young-Kai
2017-03-20
The feasibility of a single sideband (SSB) PAM4 intensity-modulation and direct-detection (IM/DD) transmission based on a CMOS ADC and DAC is experimentally demonstrated in this work. To cost-effectively build a >50 Gb/s system and to extend the transmission distance, a low-cost EML and a passive optical filter are utilized to generate the SSB signal. However, the EML-induced chirp and dispersion-induced power fading tighten the requirements on the SSB filter. To separate the effect of signal-signal beating interference, filters with different roll-off factors are employed to demonstrate the performance tolerance at different transmission distances. Moreover, a high-resolution spectrum analysis is proposed to depict the system limitation. Experimental results show that a minimum roll-off factor of 7 dB/10 GHz is required to achieve a 51.84 Gb/s 40-km transmission with only linear feed-forward equalization.
An ultra low power feature extraction and classification system for wearable seizure detection.
Page, Adam; Pramod, Siddharth; Oates, Tim; Mohsenin, Tinoosh
2015-01-01
In this paper we explore the use of a variety of machine learning algorithms for designing a reliable and low-power, multi-channel EEG feature extractor and classifier for predicting seizures from electroencephalographic data (scalp EEG). Different machine learning classifiers including k-nearest neighbor, support vector machines, naïve Bayes, logistic regression, and neural networks are explored with the goal of maximizing detection accuracy while minimizing power, area, and latency. The input to each machine learning classifier is a 198-element feature vector containing 9 features for each of the 22 EEG channels obtained over 1-second windows. All classifiers were able to obtain F1 scores over 80% and onset sensitivity of 100% when tested on 10 patients. Among the five classifiers explored, logistic regression (LR) proved to have the minimum hardware complexity while providing an average F1 score of 91%. Both ASIC and FPGA implementations of logistic regression are presented and show the smallest area, power consumption, and the lowest latency when compared to previous work.
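The low hardware cost reported for logistic regression follows from its inference step reducing to a single dot product, a bias add, and a sigmoid; the sketch below uses hypothetical trained weights to illustrate that step, and is not the authors' implementation.

```python
import numpy as np

def lr_seizure_flag(features, weights, bias, threshold=0.5):
    """One dot product, one add, one sigmoid: the whole inference step,
    which is why LR maps to such a small ASIC/FPGA footprint."""
    z = float(np.dot(weights, features)) + bias
    p = 1.0 / (1.0 + np.exp(-z))
    return p >= threshold, p

# Hypothetical trained parameters for a 198-element window feature vector
rng = np.random.default_rng(42)
w, b = rng.normal(0, 0.1, 198), -0.2
x = rng.normal(0, 1.0, 198)   # one 1-second window: 9 features x 22 channels
print(lr_seizure_flag(x, w, b))
```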
Rohatensky, Mitchell G; Livingstone, Devon M; Mintchev, Paul; Barnes, Heather K; Nakoneshny, Steven C; Demetrick, Douglas J; Dort, Joseph C; van Marle, Guido
2018-02-08
Oropharyngeal Squamous Cell Carcinoma (OPSCC) is increasing in incidence despite a decline in traditional risk factors. Human Papilloma Virus (HPV), specifically subtypes 16, 18, 31 and 35, has been implicated as the high-risk etiologic agent. HPV-positive cancers have a significantly better prognosis than HPV-negative cancers of comparable stage, and may benefit from different treatment regimens. Currently, HPV-related carcinogenesis is established indirectly through immunohistochemistry (IHC) staining for p16, a tumour suppressor gene, or polymerase chain reaction (PCR) that directly tests for HPV DNA in biopsied tissue. Loop-mediated isothermal amplification (LAMP) is more accurate than IHC, more rapid than PCR and is significantly less costly. In previous work we showed that a subtype-specific HPV LAMP assay performed similarly to PCR on purified DNA. In this study we examined the performance of this LAMP assay without DNA purification. We used LAMP assays using established primers for HPV 16 and 18, and new primers for HPV 31 and 35. LAMP reaction conditions were tested on serial dilutions of plasmid HPV DNA to confirm minimum viral copy number detection thresholds. LAMP was then performed directly on different human cell line samples without DNA purification. Our LAMP assays could detect 10^5, 10^3, 10^4, and 10^5 copies of plasmid DNA for HPV 16, 18, 31, and 35, respectively. All primer sets were subtype specific, with no cross-amplification. Our LAMP assays also reliably amplified subtype-specific HPV DNA from samples without requiring DNA isolation and purification. The high-risk OPSCC HPV subtype-specific LAMP primer sets demonstrated excellent, clinically relevant minimum copy number detection thresholds with an easy readout system. Amplification directly from samples without purification illustrated the robust nature of the assay and the primers used. This lends further support to HPV subtype-specific LAMP assays, and these specific primer sets and assays can be further developed to test for HPV in OPSCC in resource- and lab-limited settings, or even for bedside testing.
Size, weight, and power reduction of mercury cadmium telluride infrared detection modules
NASA Astrophysics Data System (ADS)
Breiter, Rainer; Ihle, Tobias; Wendler, Joachim C.; Lutz, Holger; Rutzinger, Stefan; Schallenberg, Timo; Hofmann, Karl C.; Ziegler, Johann
2011-06-01
Application requirements driving present IR technology development activities are improved capability to detect and identify a threat as well as the need to reduce size, weight and power consumption (SWaP) of thermal sights. In addition to the development of 3rd Gen IR modules providing dual-band or dual-color capability, AIM is focused on IR FPAs with reduced pitch and high operating temperature for SWaP reduction. State-of-the-art MCT technology allows AIM to produce mid-wave infrared (MWIR) detectors operating at temperatures exceeding 120 K without any need to sacrifice the 5-μm cut-off wavelength. These FPAs allow manufacturing of low-cost IR modules with minimum size, weight, and power for state-of-the-art high-performance IR systems. AIM has realized full TV format MCT 640×512 mid-wave and long-wave IR detection modules with a 15-μm pitch to meet the requirements of critical military applications like thermal weapon sights or thermal imagers in unmanned aerial vehicle applications. In typical configurations, like an F/4.6 cold shield for the 640×512 MWIR module, a noise equivalent temperature difference (NETD) <25 mK @ 5 ms integration time is achieved, while the long-wavelength infrared (LWIR) modules achieve an NETD <38 mK @ F/2 and 180 μs integration time. For the LWIR modules, FPAs with a cut-off up to 10 μm have been realized. The modules are available either with different integral rotary cooler configurations for portable applications that require minimum cooling power or with a new split linear cooler providing long lifetime with a mean time to failure (MTTF) > 20000, e.g., for warning sensors in 24/7 operation. The modules are available with optional image processing electronics providing nonuniformity correction and further image processing for a complete IR imaging solution. The latest results and performance of those modules and their applications are presented.
Miyashita, Naoyuki; Kobayashi, Intetsu; Higa, Futoshi; Aoki, Yosuke; Kikuchi, Toshiaki; Seki, Masafumi; Tateda, Kazuhiro; Maki, Nobuko; Uchino, Kazuhiro; Ogasawara, Kazuhiko; Kurachi, Satoe; Ishikawa, Tatsuya; Ishimura, Yoshito; Kanesaka, Izumo; Kiyota, Hiroshi; Watanabe, Akira
2018-05-01
The activities of various antibiotics against 58 clinical isolates of Legionella species were evaluated using two methods, extracellular activity (minimum inhibitory concentration [MIC]) and intracellular activity. Susceptibility testing was performed using BSYEα agar. The minimum extracellular concentration inhibiting intracellular multiplication (MIEC) was determined using a human monocyte-derived cell line, THP-1. The most potent drugs in terms of MICs against clinical isolates were levofloxacin, garenoxacin, and rifampicin, with MIC90 values of 0.015 μg/ml. The MICs of ciprofloxacin, pazufloxacin, moxifloxacin, clarithromycin, and azithromycin were slightly higher than those of levofloxacin, garenoxacin, and rifampicin, with an MIC90 of 0.03-0.06 μg/ml. Minocycline showed the lowest activity, with an MIC90 of 1 μg/ml. No resistance against the antibiotics tested was detected. No difference was detected in the MIC distributions of the antibiotics tested between L. pneumophila serogroup 1 and L. pneumophila non-serogroup 1. The MIECs of ciprofloxacin, pazufloxacin, levofloxacin, moxifloxacin, garenoxacin, clarithromycin, and azithromycin were almost the same as their MICs, with MIEC90 values of 0.015-0.06 μg/ml, although the MIEC of minocycline was relatively lower and that of rifampicin was higher than their respective MICs. No difference was detected in the MIEC distributions of the antibiotics tested between L. pneumophila serogroup 1 and L. pneumophila non-serogroup 1. The MIEC:MIC ratios for rifampicin (8) and pazufloxacin (2) were higher than those for levofloxacin (1), ciprofloxacin (1), moxifloxacin (1), garenoxacin (1), clarithromycin (1), and azithromycin (1). Our study showed that quinolones and macrolides had potent antimicrobial activity against both extracellular and intracellular Legionella species. The present data suggest the possible efficacy of these drugs in the treatment of Legionella infections. Copyright © 2018 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Matias, M S; Melegari, S P; Vicentini, D S; Matias, W G; Ricordel, C; Hauchard, D
2015-08-15
Nanoscience is a field that has stood out in recent years, yet the long-term health and environmental risks associated with these emerging materials are unknown. Therefore, this work investigated how to eliminate silver nanoparticles (AgNPs) from synthetic effluents by electrocoagulation (EC), given the widespread use of this type of nanoparticle (NP) in industry and its potential to inhibit the microorganisms responsible for biological treatment in effluent treatment plants. AgNPs were synthesized via four different routes by chemical reduction in aqueous solutions to simulate the chemical variations of a hypothetical industrial effluent, and efficient conditions for the EC treatment were determined. All routes used silver nitrate as the source of silver ions, and two synthesis routes were studied with sodium citrate as a stabilizer. In route I, sodium citrate functioned simultaneously as the reducing agent and stabilizing agent, whereas route II used sodium borohydride as a reducing agent. Route III used D-glucose as the reducing agent and sodium pyrophosphate as the stabilizer; route IV used sodium pyrophosphate as the stabilizing agent and sodium borohydride as the reducing agent. The efficiency of the EC process for the different synthesized solutions was studied. For route I, after 85 min of treatment, a significant decrease in the plasmon resonance peak of the sample was observed, reflecting a 98.6% reduction in the mass of AgNPs in the solution. In route II, after 12 min of EC, the absorbance results reached the detection limit of the measurement instrument, which indicates a minimum reduction of 99.9% of AgNPs in the solution. After 4 min of treatment in route III, the absorbance intensities again reached the detection limit, which indicates a minimum reduction of 99.8%. In route IV, after 10 min of treatment, a minimum AgNP reduction of 99.9% was observed. Based on these results, it was possible to verify that the solutions containing citrate considerably increased the time required to eliminate AgNPs from the synthesized effluent, whereas solutions free of this reagent showed better floc formation and are therefore best suited for the treatment. The elimination of AgNPs from effluents by EC proved effective for the studied routes. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ilić, L.; Kuzmanoski, M.; Kolarž, P.; Nina, A.; Srećković, V.; Mijić, Z.; Bajčetić, J.; Andrić, M.
2018-06-01
Measurements of atmospheric parameters were carried out during the partial solar eclipse (51% coverage of the solar disc) observed in Belgrade on 20 March 2015. The measured parameters included the height of the planetary boundary layer (PBL), meteorological parameters, solar radiation, surface ozone and air ions, as well as Very Low Frequency (VLF, 3-30 kHz) and Low Frequency (LF, 30-300 kHz) signals to detect low-ionospheric plasma perturbations. The observed decrease of global solar and UV-B radiation was 48%, similar to the solar disc coverage. Meteorological parameters showed similar behavior at two measurement sites with different elevations and different measurement heights. The air temperature change due to the solar eclipse was more pronounced at the lower measurement height, showing a decrease of 2.6 °C with a 15-min time delay relative to the eclipse maximum. However, at the other site the temperature did not decrease; its morning increase ceased with the start of the eclipse and continued after the eclipse maximum. Relative humidity at both sites remained almost constant until the eclipse maximum and then decreased as the temperature increased. The wind speed decreased and reached a minimum 35 min after the last contact. The eclipse-induced decrease of the PBL height was about 200 m, with the minimum reached 20 min after the eclipse maximum. Although dependent on UV radiation, the surface ozone concentration did not show the expected decrease, possibly due to the less significant influence of photochemical reactions at the measurement site and the decline of the PBL height. The air-ion concentration decreased during the solar eclipse, with the minimum almost coinciding with the eclipse maximum. Additionally, a referential Line-of-Sight (LOS) radio link was set up in the area of Belgrade, using a carrier frequency of 3 GHz. A perturbation of the received signal level (RSL) was observed on March 20, probably induced by the solar eclipse. Eclipse-related perturbations in the ionospheric D-region were detected based on the VLF/LF signal variations, as a consequence of the Lyα radiation decrease.
NASA Astrophysics Data System (ADS)
Cui, Lifang; Wang, Lunche; Lai, Zhongping; Tian, Qing; Liu, Wen; Li, Jun
2017-11-01
The variation characteristics of air temperature and precipitation in the Yangtze River Basin (YRB), China during 1960-2015 were analysed using a linear regression (LR) analysis, a Mann-Kendall (MK) test with Sen's slope estimator and Sen's innovative trend analysis (ITA). The results showed that the annual maximum, minimum and mean temperature significantly increased at rates of 0.15°C/10yr, 0.23°C/10yr and 0.19°C/10yr, respectively, over the whole study area during 1960-2015. The warming magnitudes for the above variables during 1980-2015 were much higher than those during 1960-2015: 0.38°C/10yr, 0.35°C/10yr and 0.36°C/10yr, respectively. The seasonal maximum, minimum and mean temperature significantly increased in the spring, autumn and winter seasons during 1960-2015. Although the summer temperatures also increased to some extent, only the minimum temperature showed a significant increasing trend. Meanwhile, the highest rate of increase of seasonal mean temperature occurred in winter (0.24°C/10yr) during 1960-2015 and spring (0.50°C/10yr) during 1980-2015, which indicated that the significant warming trend for the whole YRB could be attributed to the remarkable temperature increases in the winter and spring months. However, both the annual and seasonal warming magnitudes showed large regional differences, and a higher warming rate was detected in the eastern YRB and the western source region of the Yangtze River on the Qinghai-Tibetan Plateau (QTP). Additionally, annual precipitation increased by approximately 12.02 mm/10yr during 1960-2015 but decreased at a rate of 19.63 mm/10yr during 1980-2015. There were decreasing trends for precipitation in all four seasons since 1980 in the YRB, and a significant increasing trend was only detected in summer since 1960 (12.37 mm/10yr). Overall, a warming-wetting trend was detected in the south-eastern and north-western YRB, while there was a warming-drying trend in the middle regions.
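A compact sketch of the Mann-Kendall and Sen's-slope machinery used above (without the tie or autocorrelation corrections a full analysis would apply), run here on a made-up annual temperature series rather than the YRB data.

```python
import numpy as np
from scipy import stats

def sen_slope(y):
    """Sen's slope estimator: median of all pairwise slopes (units per time step)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n) for j in range(i + 1, n)]
    return np.median(slopes)

def mann_kendall(y):
    """Kendall's tau of the series against its time index, with the associated p-value."""
    y = np.asarray(y, dtype=float)
    return stats.kendalltau(np.arange(len(y)), y)

# Made-up annual mean temperatures with a weak warming trend plus noise
rng = np.random.default_rng(0)
temps = 15.0 + 0.02 * np.arange(56) + rng.normal(0, 0.3, 56)
print(sen_slope(temps) * 10, "degC per decade")   # Sen's slope scaled to a decade
print(mann_kendall(temps))
```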
Roy, Ranita; Tiwari, Monalisa; Donelli, Gianfranco; Tiwari, Vishvanath
2018-01-01
Biofilm refers to the complex, sessile communities of microbes found either attached to a surface or buried firmly in an extracellular matrix as aggregates. The biofilm matrix surrounding bacteria makes them tolerant to harsh conditions and resistant to antibacterial treatments. Moreover, biofilms are responsible for causing a broad range of chronic diseases, and due to the emergence of antibiotic resistance in bacteria it has become difficult to treat them with efficacy. Furthermore, the antibiotics available to date are ineffective for treating these biofilm-related infections due to their higher values of minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC), which may result in in-vivo toxicity. Hence, it is critically important to design or screen anti-biofilm molecules that can effectively minimize and eradicate biofilm-related infections. In the present article, we have highlighted the mechanism of biofilm formation with reference to different models and various methods used for biofilm detection. A major focus has been put on various anti-biofilm molecules discovered or tested to date, which include herbal active compounds, chelating agents, peptide antibiotics, lantibiotics and synthetic chemical compounds, along with their structures, mechanisms of action and their respective MICs, MBCs, minimum biofilm inhibitory concentrations (MBICs) as well as the half maximal inhibitory concentration (IC50) values available in the literature so far. The different modes of action of anti-biofilm molecules addressed here are inhibition via interference in quorum sensing pathways, adhesion mechanisms, disruption of extracellular DNA, proteins, lipopolysaccharides, exopolysaccharides and secondary messengers involved in various signaling pathways. From this study, we conclude that the molecules considered here might be used to treat biofilm-associated infections after significant structural modifications, thereby investigating their effective delivery in the host. It should also be ensured that the minimum effective concentration of these molecules is capable of eradicating biofilm infections with maximum potency without posing any adverse side effects on the host. PMID:28362216
Detection of Leaks in Water Distribution System using Non-Destructive Techniques
NASA Astrophysics Data System (ADS)
Aslam, H.; Kaur, M.; Sasi, S.; Mortula, Md M.; Yehia, S.; Ali, T.
2018-05-01
Water is scarce and needs to be conserved. A considerable amount of the water that flows in water distribution systems is lost due to pipe leaks. Consequently, innovation in methods of pipe leakage detection for early recognition and repair of these leaks is vital to ensure minimum wastage of water in distribution systems. A major component of pipe leak detection is the ability to accurately locate the leak in pipes with minimum invasion. Therefore, this paper studies the leak detection abilities of non-destructive techniques (NDTs), namely Ground Penetrating Radar (GPR) and a spectrometer, and aims at determining whether these instruments are effective in identifying the leak. An experimental setup was constructed to simulate the underground conditions of water distribution systems. After analysing the experimental data, it was concluded that both the GPR and the spectrometer were effective in detecting leaks in the pipes. However, the results obtained from the spectrometer were less differentiating in terms of observing the leaks in comparison to the results obtained from the GPR. In addition, it was concluded that neither instrument could be used if the water from the leaks had reached the surface, resulting in surface ponding.
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for clinical widespread use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows to record near-optimal signal-to-noise ratios for 80% of subjects.
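One ingredient mentioned above, noise-weighted averaging, can be sketched as weighting each EEG epoch by the inverse of a per-epoch noise estimate before averaging; the crude variance-based noise proxy below is an assumption for illustration, not the authors' exact estimator.

```python
import numpy as np

def noise_weighted_average(epochs):
    """Average EEG epochs with weights inversely proportional to a per-epoch
    noise estimate, so artifact-laden epochs contribute less.
    epochs: array of shape (n_epochs, n_samples)."""
    noise_var = epochs.var(axis=1)                 # crude noise proxy per epoch
    w = 1.0 / np.maximum(noise_var, 1e-12)
    w /= w.sum()
    return (w[:, None] * epochs).sum(axis=0)

# Synthetic example: a weak 90 Hz ASSR buried in noise, with one artifact epoch
t = np.arange(1000) / 1000.0
epochs = 0.1 * np.sin(2 * np.pi * 90 * t) + np.random.default_rng(3).normal(0, 1, (20, 1000))
epochs[5] *= 10                                    # the noisy epoch gets down-weighted
avg = noise_weighted_average(epochs)
```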
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
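A minimal difference-in-differences sketch in the spirit of the design described above, assuming a hypothetical state-by-quarter panel file; it includes two-way fixed effects and state-clustered standard errors but omits the covariates and state-specific nonlinear trends of the actual models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per state-quarter with columns
# 'birth_rate' (adolescent births per 1,000), 'min_wage', 'state', 'qdate'.
panel = pd.read_csv("state_quarter_panel.csv")   # hypothetical file

did = smf.ols("birth_rate ~ min_wage + C(state) + C(qdate)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state"]}
)
print(did.params["min_wage"], did.bse["min_wage"])   # effect of a $1 increase
```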
Optimization of Sensor Monitoring Strategies for Emissions
NASA Astrophysics Data System (ADS)
Klise, K. A.; Laird, C. D.; Downey, N.; Baker Hebert, L.; Blewitt, D.; Smith, G. R.
2016-12-01
Continuous or regularly scheduled monitoring has the potential to quickly identify changes in air quality. However, even with low-cost sensors, only a limited number of sensors can be placed to monitor airborne pollutants. The physical placement of these sensors and the sensor technology used can have a large impact on the performance of a monitoring strategy. Furthermore, sensors can be placed for different objectives, including maximum coverage, minimum time to detection or exposure, or to quantify emissions. Different objectives may require different monitoring strategies, which need to be evaluated by stakeholders before sensors are placed in the field. In this presentation, we outline methods to enhance ambient detection programs through optimal design of the monitoring strategy. These methods integrate atmospheric transport models with sensor characteristics, including fixed and mobile sensors, sensor cost and failure rate. The methods use site specific pre-computed scenarios which capture differences in meteorology, terrain, concentration averaging times, gas concentration, and emission characteristics. The pre-computed scenarios become input to a mixed-integer, stochastic programming problem that solves for sensor locations and types that maximize the effectiveness of the detection program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
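The placement problem described above is solved as a mixed-integer stochastic program; as a simplified, hedged stand-in, the greedy scenario-coverage heuristic below picks sensor locations from a precomputed detection matrix (all inputs are hypothetical, and the real formulation also handles detection time, sensor type, cost, and failure rate).

```python
import numpy as np

def greedy_placement(detect, budget):
    """detect[i, s] is True if a sensor at candidate location i detects scenario s.
    Greedily add the location that covers the most still-undetected scenarios."""
    n_loc, n_scen = detect.shape
    covered = np.zeros(n_scen, dtype=bool)
    chosen = []
    for _ in range(budget):
        gains = (detect & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= detect[best]
    return chosen, covered.mean()

# Made-up detection matrix: 30 candidate locations, 500 precomputed transport scenarios
rng = np.random.default_rng(7)
detect = rng.random((30, 500)) < 0.05
locations, coverage = greedy_placement(detect, budget=5)
print(locations, coverage)
```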
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been done under different types of traffic source.
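A toy illustration of the two building blocks named in MSTMCF, using networkx on a made-up hybrid-access topology; it mirrors the model structure (an MST backbone followed by minimum cost flow routing), not the authors' algorithm or network.

```python
import networkx as nx

# Toy hybrid access topology: edge weights stand for link delays (assumed values).
G = nx.Graph()
G.add_weighted_edges_from([
    ("OLT", "ONU1", 2.0), ("OLT", "ONU2", 3.0),
    ("ONU1", "AP1", 1.5), ("ONU2", "AP2", 1.0), ("AP1", "AP2", 2.5),
])

# Stage 1: minimum spanning tree gives a low-delay backbone.
mst = nx.minimum_spanning_tree(G, weight="weight")

# Stage 2: route traffic on a directed copy as a minimum cost flow problem.
D = nx.DiGraph()
for u, v, d in mst.edges(data=True):
    cost = int(d["weight"] * 10)          # integer costs for the flow solver
    D.add_edge(u, v, weight=cost, capacity=10)
    D.add_edge(v, u, weight=cost, capacity=10)
D.nodes["AP1"]["demand"] = -5             # source injects 5 units of traffic
D.nodes["OLT"]["demand"] = 5              # sink absorbs 5 units
flow = nx.min_cost_flow(D)
print(flow)
```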
40 CFR 264.304 - Response actions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... into the leak detection system exceeds the action leakage rate for any sump, the owner or operator must... thereafter, as long as the flow rate in the leak detection system exceeds the action leakage rate, the owner... forth the actions to be taken if the action leakage rate has been exceeded. At a minimum, the response...
40 CFR 264.253 - Response actions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... rate into the leak detection system exceeds the action leakage rate for any sump, the owner or operator... thereafter, as long as the flow rate in the leak detection system exceeds the action leakage rate, the owner... must set forth the actions to be taken if the action leakage rate has been exceeded. At a minimum, the...
Mapping Health Data: Improved Privacy Protection With Donut Method Geomasking
Hampton, Kristen H.; Fitch, Molly K.; Allshouse, William B.; Doherty, Irene A.; Gesink, Dionne C.; Leone, Peter A.; Serre, Marc L.; Miller, William C.
2010-01-01
A major challenge in mapping health data is protecting patient privacy while maintaining the spatial resolution necessary for spatial surveillance and outbreak identification. A new adaptive geomasking technique, referred to as the donut method, extends current methods of random displacement by ensuring a user-defined minimum level of geoprivacy. In donut method geomasking, each geocoded address is relocated in a random direction by at least a minimum distance, but less than a maximum distance. The authors compared the donut method with current methods of random perturbation and aggregation regarding measures of privacy protection and cluster detection performance by masking multiple disease field simulations under a range of parameters. Both the donut method and random perturbation performed better than aggregation in cluster detection measures. The performance of the donut method in geoprivacy measures was at least 42.7% higher and in cluster detection measures was less than 4.8% lower than that of random perturbation. Results show that the donut method provides a consistently higher level of privacy protection with a minimal decrease in cluster detection performance, especially in areas where the risk to individual geoprivacy is greatest. PMID:20817785
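A small sketch of the donut geomasking step described above, in projected (planar) coordinates; sampling uniformly over the annulus area is one reasonable choice, and in practice r_min and r_max would be set from the local population density rather than the fixed values shown.

```python
import numpy as np

def donut_mask(x, y, r_min, r_max, rng=None):
    """Displace (x, y) by a random bearing and a distance in [r_min, r_max),
    sampled uniformly over the annulus area (planar coordinates assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(rng.uniform(r_min ** 2, r_max ** 2))
    return x + r * np.cos(theta), y + r * np.sin(theta)

# e.g. guarantee at least 50 m and at most 250 m of displacement
print(donut_mask(562300.0, 4365100.0, 50.0, 250.0))
```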
Influential factors in thermographic analysis in substations
NASA Astrophysics Data System (ADS)
Zarco-Periñán, Pedro J.; Martínez-Ramos, José L.
2018-05-01
Thermography is one of the best predictive maintenance tools available due to its low cost, fast implementation and the effectiveness of the results obtained. The detected hot spots enable serious incidents to be prevented, both in the facilities and in the equipment where they have been located. In accordance with the criticality of such points, the repair is carried out with greater or lesser urgency. However, for detection to remain reliable, the facility must meet a set of requirements that are normally taken for granted; otherwise hot spots cannot be detected correctly and will subsequently cause unwanted defects. This paper analyses three aspects that influence the reliability of the results obtained: the minimum percentage of load that a circuit must carry in order for all the hot spots therein to be located; the minimum waiting time from when an item of equipment or a facility is energized until a thermographic inspection can be carried out with a complete guarantee of hot spot detection; and the influence on the generation of hot spots exerted by the tightening torque applied during the assembly process.
Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring
Wielopolski, Lucian
2011-01-01
There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir's integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of the current instrumentation for NSM are too high to detect weak signals that are embedded in the background levels of natural variation, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics providing the following: (1) high sensitivity with a reducible error of measurement and detection limits, and (2) temporal and spatial integration of carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the minimum detection level the error asymptotically approaches 47%. PMID:21556180
The use of NOAA AVHRR data for assessment of the urban heat island effect
Gallo, K.P.; McNab, A. L.; Karl, Thomas R.; Brown, Jesslyn F.; Hood, J. J.; Tarpley, J.D.
1993-01-01
A vegetation index and a radiative surface temperature were derived from satellite data acquired at approximately 1330 LST for each of 37 cities and for their respective nearby rural regions from 28 June through 8 August 1991. Urban-rural differences for the vegetation index and the surface temperatures were computed and then compared to observed urban-rural differences in minimum air temperatures. The purpose of these comparisons was to evaluate the use of satellite data to assess the influence of the urban environment on observed minimum air temperatures (the urban heat island effect). The temporal consistency of the data, from daily data to weekly, biweekly, and monthly intervals, was also evaluated. The satellite-derived normalized difference (ND) vegetation-index data, sampled over urban and rural regions composed of a variety of land surface environments, were linearly related to the difference in observed urban and rural minimum temperatures. The relationship between the ND index and observed differences in minimum temperature was improved when analyses were restricted by elevation differences between the sample locations and when biweekly or monthly intervals were utilized. The difference in the ND index between urban and rural regions appears to be an indicator of the difference in surface properties (evaporation and heat storage capacity) between the two environments that are responsible for differences in urban and rural minimum temperatures. The urban and rural differences in the ND index explain a greater amount of the variation observed in minimum temperature differences than past analyses that utilized urban population data. The use of satellite data may contribute to a globally consistent method for analysis of urban heat island bias.
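The normalized difference vegetation index underlying the ND comparisons above is a simple band ratio; the sketch below uses illustrative AVHRR channel-1 (red) and channel-2 (near-infrared) reflectances rather than the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative urban vs. rural reflectances (AVHRR ch2 = NIR, ch1 = red)
nd_urban = ndvi(0.30, 0.22)
nd_rural = ndvi(0.45, 0.15)
print(nd_urban - nd_rural)   # the urban-rural ND difference used in the comparisons
```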
Magnetic Sensors with Picotesla Magnetic Field Sensitivity at Room Temperature
2008-06-01
… such small fields require cryogenic cooling, such as SQUID sensors; require sophisticated detection systems, such as atomic magnetometers and fluxgate magnetometers; or have large size and poor low-frequency performance, such as coil systems [3-7]. The minimum detectable field (the field noise times … [text truncated in extraction]. Reference fragment: Kingdon, "Development of a Combined EMI/Magnetometer Sensor for UXO Detection," Proc. Symposium on the Applications of Geophysics to Environmental and …
Entanglement detection in optical lattices of bosonic atoms with collective measurements
NASA Astrophysics Data System (ADS)
Tóth, Géza
2004-05-01
The minimum requirements for entanglement detection are discussed for a spin chain in which the spins cannot be individually accessed. The methods presented detect entangled states close to a cluster state and a many-body singlet state, and seem to be viable for experimental realization in optical lattices of two-state bosonic atoms. The entanglement criteria are based on entanglement witnesses and on the uncertainty of collective observables.
Gamma-ray astronomy with a large muon detector in the ARGO-YBJ experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Sciascio, G.; Di Girolamo, T.; Megna, R.
2005-02-21
The ARGO-YBJ experiment, currently under construction at the YangBaJing Laboratory (Tibet, P.R. China, 4300 m a.s.l.), could be upgraded with a large (~2500 m2) muon detector both to extend the sensitivity to gamma-ray sources to energies greater than ~20 TeV and to perform a cosmic ray primary composition study. In this paper we present an evaluation of the rejection power for proton-induced showers achievable with the upgraded ARGO-YBJ detector. Minimum detectable gamma-ray fluxes are calculated for different experimental setups.
Machine vision system for automated detection of stained pistachio nuts
NASA Astrophysics Data System (ADS)
Pearson, Tom C.
1995-01-01
A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved in manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.
NASA Astrophysics Data System (ADS)
Prieto, E.; Casanovas, R.; Salvadó, M.
2018-03-01
A scintillation gamma-ray spectrometry water monitor with a 2″ × 2″ LaBr3(Ce) detector was characterized in this study. This monitor measures gamma-ray spectra of river water. Energy and resolution calibrations were performed experimentally, whereas the detector efficiency was determined using Monte Carlo simulations with EGS5 code system. Values of the minimum detectable activity concentrations for 131I and 137Cs were calculated for different integration times. As an example of the monitor performance after calibration, a radiological increment during a rainfall episode was studied.
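A hedged sketch of a Currie-style minimum detectable activity calculation of the kind such monitors report; the efficiency, background counts, emission probability, and counting time below are illustrative assumptions, not the calibrated values of this instrument.

```python
import numpy as np

def mda_bq(background_counts, efficiency, emission_prob, live_time_s):
    """Currie-style minimum detectable activity at ~95% confidence:
    MDA = (2.71 + 4.65*sqrt(B)) / (eff * p_gamma * t), returned in Bq."""
    detection_limit_counts = 2.71 + 4.65 * np.sqrt(background_counts)
    return detection_limit_counts / (efficiency * emission_prob * live_time_s)

# e.g. 137Cs (661.7 keV, emission probability ~0.85) with assumed efficiency and background
print(mda_bq(background_counts=400, efficiency=0.02, emission_prob=0.85, live_time_s=3600))
```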
Walenkamp, Monique M J; de Muinck Keizer, Robert-Jan; Goslings, J Carel; Vos, Lara M; Rosenwasser, Melvin P; Schep, Niels W L
2015-10-01
The Patient-rated Wrist Evaluation (PRWE) is a commonly used instrument in upper extremity surgery and in research. However, to recognize a treatment effect expressed as a change in PRWE, it is important to be aware of the minimum clinically important difference (MCID) and the minimum detectable change (MDC). The MCID of an outcome tool like the PRWE is defined as the smallest change in a score that is likely to be appreciated by a patient as an important change, while the MDC is defined as the smallest amount of change that can be detected by an outcome measure. A numerical change in score that is less than the MCID, even when statistically significant, does not represent a true clinically relevant change. To our knowledge, the MCID and MDC of the PRWE have not been determined in patients with distal radius fractures. We asked: (1) What is the MCID of the PRWE score for patients with distal radius fractures? (2) What is the MDC of the PRWE? Our prospective cohort study included 102 patients with a distal radius fracture and a median age of 59 years (interquartile range [IQR], 48-66 years). All patients completed the PRWE questionnaire during each of two separate visits. At the second visit, patients were asked to indicate the degree of clinical change they appreciated since the previous visit. Accordingly, patients were categorized in two groups: (1) minimally improved or (2) no change. The groups were used to anchor the changes observed in the PRWE score to patients' perspectives of what was clinically important. We determined the MCID using an anchor-based receiver operator characteristic method. In this context, the change in the PRWE score was considered a diagnostic test, and the anchor (minimally improved or no change as noted by the patients from visit to visit) was the gold standard. The optimal receiver operator characteristic cutoff point calculated with the Youden index reflected the value of the MCID. In our study, the MCID of the PRWE was 11.5 points. The area under the curve was 0.54 (95% CI, 0.37-0.70) for the pain subscale and 0.71 (95% CI, 0.57-0.85) for the function subscale. We determined the MDC to be 11.0 points. We determined the MCID of the PRWE score for patients with distal radius fractures using the anchor-based approach and verified that the MDC of the PRWE was sufficiently small to detect our MCID. We recommend using an improvement on the PRWE of more than 11.5 points as the smallest clinically relevant difference when evaluating the effects of treatments and when performing sample-size calculations on studies of distal radius fractures.
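A small sketch of the anchor-based ROC step described above: the MCID is taken as the change-score cutoff that maximizes Youden's J between the 'minimally improved' and 'no change' groups; the input arrays are hypothetical, not the study's patients.

```python
import numpy as np
from sklearn.metrics import roc_curve

def mcid_from_anchor(score_change, improved):
    """Return the change-score cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(improved, score_change)
    j = tpr - fpr
    return thresholds[int(np.argmax(j))]

# Hypothetical data: PRWE improvement between visits and the anchor (1 = minimally improved)
rng = np.random.default_rng(1)
change = np.concatenate([rng.normal(14, 8, 60), rng.normal(4, 8, 40)])
anchor = np.concatenate([np.ones(60, dtype=int), np.zeros(40, dtype=int)])
print(mcid_from_anchor(change, anchor))
```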
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
Minimum resolvable power contrast model
NASA Astrophysics Data System (ADS)
Qian, Shuai; Wang, Xia; Zhou, Jingjing
2018-01-01
Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or jointly, they cannot intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.
Fluorescent hybridization probes for nucleic acid detection.
Guo, Jia; Ju, Jingyue; Turro, Nicholas J
2012-04-01
Due to their high sensitivity and selectivity, minimum interference with living biological systems, and ease of design and synthesis, fluorescent hybridization probes have been widely used to detect nucleic acids both in vivo and in vitro. Molecular beacons (MBs) and binary probes (BPs) are two very important hybridization probes that are designed based on well-established photophysical principles. These probes have shown particular applicability in a variety of studies, such as mRNA tracking, single nucleotide polymorphism (SNP) detection, polymerase chain reaction (PCR) monitoring, and microorganism identification. Molecular beacons are hairpin oligonucleotide probes that present distinctive fluorescent signatures in the presence and absence of their target. Binary probes consist of two fluorescently labeled oligonucleotide strands that can hybridize to adjacent regions of their target and generate distinctive fluorescence signals. These probes have been extensively studied and modified for different applications by modulating their structures or using various combinations of fluorophores, excimer-forming molecules, and metal complexes. This review describes the applicability and advantages of various hybridization probes that utilize novel and creative design to enhance their target detection sensitivity and specificity.
Porto, Suely K S S; Nogueira, Thiago; Blanes, Lucas; Doble, Philip; Sabino, Bruno D; do Lago, Claudimir L; Angnes, Lúcio
2014-11-01
A method for the identification of 3,4-methylenedioxymethamphetamine (MDMA) and meta-chlorophenylpiperazine (mCPP) was developed employing capillary electrophoresis (CE) with capacitively coupled contactless conductivity detection (C4D). Sample extraction, separation, and detection of "Ecstasy" tablets were performed in <10 min without sample derivatization. The separation electrolyte was 20 mM TAPS/lithium, pH 8.7. Average minimal detectable amounts for MDMA and mCPP were 0.04 mg/tablet, several orders of magnitude lower than the minimum amount encountered in a tablet. Seven different Ecstasy tablets seized in Rio de Janeiro, Brazil, were analyzed by CE-C4D and compared against routine gas chromatography-mass spectrometry (GC-MS). The CE method demonstrated sufficient selectivity to discriminate the two target drugs, MDMA and mCPP, from the other drugs present in seizures, namely amphepramone, fenproporex, caffeine, lidocaine, and cocaine. Separation was performed in <90 s. The advantages of using C4D instead of traditional CE-UV methods for in-field analysis are also discussed. © 2014 American Academy of Forensic Sciences.
Performance analysis of robust road sign identification
NASA Astrophysics Data System (ADS)
Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.
2013-12-01
This study describes the performance analysis of a robust system for road sign identification that incorporates two stages with different algorithms. The proposed algorithms consist of HSV color filtering and PCA in the detection and recognition stages, respectively. The proposed algorithms are able to detect the three standard types of colored signs, namely red, yellow and blue. The hypothesis of the study is that road signs can be detected and identified even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensionality; the sign image can be easily recognized and identified by the PCA method, as it has been used in many application areas. Based on the experimental results, the HSV stage is robust in road sign detection, with minimum success rates of 88% and 77% for non-partially and partially occluded images, respectively. Successful recognition rates using PCA are in the range of 94-98%. All classes are recognized successfully at occlusion levels between 5% and 10%.
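A minimal OpenCV sketch of the HSV color-filtering detection stage; the hue and saturation bounds and the input file name are illustrative assumptions, not the thresholds tuned in the study.

```python
import cv2
import numpy as np

def red_sign_mask(bgr_image):
    """Return a binary mask of red-ish pixels; red wraps around hue 0 in HSV."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 90, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 90, 60), (180, 255, 255))
    return cv2.bitwise_or(lower, upper)

# Usage: take the largest red blob as a candidate sign region
img = cv2.imread("sign.jpg")                      # hypothetical input image
mask = red_sign_mask(img)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
```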
De la Varga, Herminia; Águeda, Beatriz; Ágreda, Teresa; Martínez-Peña, Fernando; Parladé, Javier; Pera, Joan
2013-07-01
The annual belowground dynamics of extraradical soil mycelium and sporocarp production of two ectomycorrhizal fungi, Boletus edulis and Lactarius deliciosus, have been studied in two different pine forests (Pinar Grande and Pinares Llanos, respectively) in Soria (central Spain). Soil samples (five per plot) were taken monthly (from September 2009 to August 2010 in Pinar Grande and from September 2010 to September 2011 in Pinares Llanos) in eight permanent plots (four for each site). B. edulis and L. deliciosus extraradical soil mycelium was quantified by real-time polymerase chain reaction, with DNA extracted from soil samples, using specific primers and TaqMan® probes. The quantities of B. edulis soil mycelium did not differ significantly between plots, but there was a significant difference over time, with a maximum in February (0.1576 mg mycelium/g soil) and a minimum in October (0.0170 mg mycelium/g soil). For L. deliciosus, significant differences were detected between plots and over time. The highest amount of mycelium was found in December (1.84 mg mycelium/g soil) and the minimum in February (0.0332 mg mycelium/g soil). B. edulis mycelium quantities were positively correlated with precipitation of the current month and negatively correlated with the mean temperature of the previous month. Mycelium biomass of L. deliciosus was positively correlated with relative humidity and negatively correlated with mean temperature and radiation. No significant correlation between productivity of the plots and soil mycelium biomass was observed for either of the two species. No correlations were found between B. edulis sporocarp production and weather parameters. Sporocarp production of L. deliciosus was positively correlated with precipitation and relative humidity and negatively correlated with maximum and minimum temperatures. Both species have a similar distribution over time, presenting annual dynamics characterized by seasonal variability, with a clear increase in the amount of biomass during the coldest months of the year. Soil mycelial dynamics of both species are strongly dependent on the weather.
Methods for the preparation and analysis of solids and suspended solids for total mercury
Olund, Shane D.; DeWild, John F.; Olson, Mark L.; Tate, Michael T.
2004-01-01
The methods documented in this report are utilized by the Wisconsin District Mercury Lab for analysis of total mercury in solids (soils and sediments) and suspended solids (isolated on filters). Separate procedures are required for the different sample types. For solids, samples are prepared by room-temperature acid digestion and oxidation with aqua regia. The samples are brought up to volume with a 5 percent bromine monochloride solution to ensure complete oxidation and heated at 50°C in an oven overnight. Samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. A method detection limit of 0.3 ng of mercury per digestion bomb was established using multiple analyses of an environmental sample. Based on the range of masses processed, the minimum sample reporting limit varies from 0.6 ng/g to 6 ng/g. Suspended solids samples are oxidized with a 5 percent bromine monochloride solution and held at 50°C in an oven for 5 days. The samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. Using a certified reference material as a surrogate for an environmental sample, a method detection limit of 0.059 ng of mercury per filter was established. The minimum sample reporting limit varies from 0.059 ng/L to 1.18 ng/L, depending on the volume of water filtered.
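The relation between the absolute method detection limits and the per-sample reporting limits quoted above can be illustrated with a small calculation; the sample masses and filtered volumes below are back-calculated assumptions consistent with the reported ranges, not values stated in the report.

```python
# Minimal sketch: per-sample reporting limit = absolute MDL / amount of sample processed.
# Sample masses and filtered volumes are assumed, chosen to reproduce the reported ranges.

def reporting_limit(mdl_ng, sample_amount):
    """Reporting limit in ng per unit of sample (g or L) for a given absolute MDL in ng."""
    return mdl_ng / sample_amount

# Solids: MDL = 0.3 ng Hg per digestion bomb.
print(reporting_limit(0.3, 0.5))     # ~0.6 ng/g for an assumed 0.5 g aliquot
print(reporting_limit(0.3, 0.05))    # ~6 ng/g for an assumed 0.05 g aliquot

# Suspended solids: MDL = 0.059 ng Hg per filter.
print(reporting_limit(0.059, 1.0))   # ~0.059 ng/L if 1 L of water is filtered
print(reporting_limit(0.059, 0.05))  # ~1.18 ng/L if only 0.05 L is filtered
```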
1982-08-01
[Data table fragment: channel minimum and maximum values for channels PHMG, PS3, T3, T5, and PT51 (1,988 data points), followed by cruise and take-off mode data for the same channels (4,137 data points).]
Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Bruton, William M.
1987-01-01
The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.
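As a rough sketch of the analytical-redundancy idea underlying ADIA (not the ADIA algorithm itself), each sensor reading can be compared with a model-based estimate and flagged when the residual exceeds a minimum detectable failure level; the sensor names, estimates, and thresholds below are illustrative assumptions.

```python
# Minimal sketch of residual-based (analytical-redundancy) sensor failure detection.
# Sensor names, model estimates, and thresholds are illustrative assumptions only.

def detect_sensor_failures(measurements, estimates, thresholds):
    """Return the set of sensors whose residual exceeds its minimum detectable failure level."""
    failed = set()
    for name, measured in measurements.items():
        residual = abs(measured - estimates[name])
        if residual > thresholds[name]:
            failed.add(name)
    return failed

measurements = {"fan_speed_rpm": 10150.0, "burner_pressure_kpa": 310.0}  # sensed values
estimates = {"fan_speed_rpm": 10080.0, "burner_pressure_kpa": 298.0}     # model-based estimates
thresholds = {"fan_speed_rpm": 200.0, "burner_pressure_kpa": 25.0}       # detection thresholds
print(detect_sensor_failures(measurements, estimates, thresholds))       # set(): no failure flagged
```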
NASA Technical Reports Server (NTRS)
Rorie, Conrad; Fern, Lisa; Pack, Jessica; Shively, Jay; Draper, Mark H.
2015-01-01
The pilot-in-the-loop Detect-and-Avoid (DAA) task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear from other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two levels of information (basic, advanced) were compared across two levels of display location (standalone, integrated), for a total of four displays. The results indicate that the advanced displays had faster overall response times compared to the basic displays; however, there were no significant differences between the standalone and integrated displays.
Variability in Population Density of House Dust Mites of Bitlis and Muş, Turkey.
Aykut, M; Erman, O K; Doğan, S
2016-05-01
This study was conducted to investigate the relationship between the number of house dust mites/g dust and different physical and environmental variables. A total of 1,040 house dust samples were collected from houses in Bitlis and Muş Provinces, Turkey, between May 2010 and February 2012. Overall, 751 (72.2%) of dust samples were mite positive. The number of mites/g dust varied between 20 and 1,840 in mite-positive houses. A significant correlation was detected between the mean number of mites and the altitude of houses, frequency of monthly vacuum cleaning, number of individuals in the household, and relative humidity. No association was found between the number of mites and temperature, type of heating, existence of allergic diseases, or the age and structure of houses. The maximum number of mites was detected in summer and the minimum in autumn. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.
Davidson, C A; Griffith, C J; Peters, A C; Fielding, L M
1999-01-01
The minimum bacterial detection limits and operator reproducibility of the Biotrace Clean-Trace™ Rapid Cleanliness Test and traditional hygiene swabbing were determined. Areas (100 cm²) of food grade stainless steel were separately inoculated with known levels of Staphylococcus aureus (NCTC 6571) and Escherichia coli (ATCC 25922). Surfaces were sampled either immediately after inoculation while still wet, or after 60 min when completely dry. For both organisms the minimum detection limit of the ATP Clean-Trace™ Rapid Cleanliness Test was 10⁴ cfu/100 cm² (p < 0.05) and was the same for wet and dry surfaces. Both organism type and surface status (i.e. wet or dry) influenced the minimum detection limits of hygiene swabbing, which ranged from 10² cfu/100 cm² to >10⁷ cfu/100 cm². Hygiene swabbing percentage recovery rates for both organisms were less than 0.1% for dried surfaces but ranged from 0.33% to 8.8% for wet surfaces. When assessed by six technically qualified operators, the Biotrace Clean-Trace™ Rapid Cleanliness Test gave superior reproducibility for both clean and inoculated surfaces, giving mean coefficients of variation of 24% and 32%, respectively. Hygiene swabbing of inoculated surfaces gave a mean CV of 130%. The results are discussed in the context of hygiene monitoring within the food industry. Copyright 1999 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Wilson, Jennifer G.; Cummins, Kenneth L.; Krider, E. Philip
2009-12-01
The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) use data from two cloud-to-ground (CG) lightning detection networks, the Cloud-to-Ground Lightning Surveillance System (CGLSS) and the U.S. National Lightning Detection Network™ (NLDN), and a volumetric lightning mapping array, the Lightning Detection and Ranging (LDAR) system, to monitor and characterize lightning that is potentially hazardous to launch or ground operations. Data obtained from these systems during June-August 2006 have been examined to check the classification of small, negative CGLSS reports that have an estimated peak current, ∣Ip∣ less than 7 kA, and to determine the smallest values of Ip that are produced by first strokes, by subsequent strokes that create a new ground contact (NGC), and by subsequent strokes that remain in a preexisting channel (PEC). The results show that within 20 km of the KSC-ER, 21% of the low-amplitude negative CGLSS reports were produced by first strokes, with a minimum Ip of -2.9 kA; 31% were by NGCs, with a minimum Ip of -2.0 kA; and 14% were by PECs, with a minimum Ip of -2.2 kA. The remaining 34% were produced by cloud pulses or lightning events that we were not able to classify.
Resource sharing on CSMA/CD networks in the presence of noise. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dinschel, Duane Edward
1987-01-01
Resource sharing on carrier sense multiple access with collision detection (CSMA/CD) networks can be accomplished by using window-control algorithms for bus contention. The window-control algorithms are designed to grant permission to transmit to the station with the minimum contention parameter. Proper operation of the window-control algorithm requires that all stations sense the same state of the network in each contention slot. Noise causes the state of the network to appear as a collision. False collisions can cause the window-control algorithm to terminate without isolating any stations. A two-phase window-control protocol and approximate recurrence equation with noise as a parameter to improve the performance of the window-control algorithms in the presence of noise are developed. The results are compared through simulation, with the approximate recurrence equation yielding the best overall performance. Noise is an even bigger problem when it is not detected by all stations. In such cases it is possible for the window boundaries of the contending stations to become out of phase. Consequently, it is possible to isolate a station other than the one with the minimum contention parameter. To guarantee proper isolation of the minimum, a broadcast phase must be added after the termination of the algorithm. The protocol required to correct the window-control algorithm when noise is not detected by all stations is discussed.
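A minimal, noise-free sketch of a window-control contention scheme of the kind analyzed in the thesis is given below: the window is repeatedly split until exactly one station, the one holding the minimum contention parameter, transmits alone. In this idealized form, a false collision caused by noise would shrink the window even though no station transmitted, which is the failure mode the two-phase protocol is designed to handle. The parameter range and slot limit are assumptions of this sketch.

```python
# Minimal centralized simulation of window-splitting contention resolution (no noise).

def isolate_minimum(params, low=0.0, high=1.0, max_slots=64):
    """params: one contention parameter per station, all in [low, high).
    Returns the index of the station holding the minimum, or None if not isolated."""
    for _ in range(max_slots):
        mid = (low + high) / 2.0
        inside = [i for i, p in enumerate(params) if low <= p < mid]
        if len(inside) == 1:      # exactly one transmission: the minimum is isolated
            return inside[0]
        if len(inside) > 1:       # collision: the minimum lies in the lower half-window
            high = mid
        else:                     # idle slot: the minimum lies in the upper half-window
            low = mid
    return None

print(isolate_minimum([0.71, 0.22, 0.54, 0.83]))  # -> 1 (station with the smallest parameter)
```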
Adaptive road crack detection system by pavement classification.
Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro
2011-01-01
This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is firstly carried out to both smooth the texture and enhance the linear features. Non-crack features detection is then applied to mask areas of the images with joints, sealed cracks and white painting, that usually generate false positive cracking. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters. A correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear on Spanish roads is proposed. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack features detection, results show that the introduction of such a module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.
Peptide-activated gold nanoparticles for selective visual sensing of virus
NASA Astrophysics Data System (ADS)
Sajjanar, Basavaraj; Kakodia, Bhuvna; Bisht, Deepika; Saxena, Shikha; Singh, Arvind Kumar; Joshi, Vinay; Tiwari, Ashok Kumar; Kumar, Satish
2015-05-01
In this study, we report a peptide-gold nanoparticle (AuNP)-based visual sensor for viruses. Citrate-stabilized AuNP (20 ± 1.9 nm) were functionalized via a strong sulfur-gold interface using a cysteinylated virus-specific peptide. Peptide-Cys-AuNP formed complexes with the viruses, which caused them to aggregate. The aggregation can be observed with the naked eye and also with a UV-Vis spectrophotometer as a color change from bright red to purple. The test allows for fast and selective detection of specific viruses. Spectroscopic measurements showed a high linear correlation (R² = 0.995) between the changes in optical density ratio (OD610/OD520) and the different concentrations of virus. The new method was compared with the hemagglutinating (HA) test for Newcastle disease virus (NDV). The results indicated that peptide-Cys-AuNP was more sensitive and can visually detect the minimum number of virus particles present in biological samples. The limit of detection for the NDV was 0.125 HA units of the virus. The method allows for selective detection and quantification of the NDV, and requires neither isolation of viral RNA nor PCR experiments. This strategy may be utilized for detection of other important human and animal viral pathogens.
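The calibration implied by the reported linear correlation can be sketched as a straight-line fit of the OD610/OD520 ratio against virus amount; the data points below are invented placeholders, not the published measurements.

```python
# Minimal sketch of a linear calibration of OD610/OD520 against virus amount (placeholder data).
import numpy as np

virus_amount = np.array([0.125, 0.25, 0.5, 1.0, 2.0])   # assumed units (e.g., HA units)
od_ratio = np.array([0.42, 0.47, 0.58, 0.79, 1.21])     # OD610/OD520, made-up values

slope, intercept = np.polyfit(virus_amount, od_ratio, 1)
r = np.corrcoef(virus_amount, od_ratio)[0, 1]
print(f"OD ratio ~ {slope:.2f} * amount + {intercept:.2f}, R^2 = {r**2:.3f}")
```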
When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.
Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui
2018-05-01
In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding the two shortest paths from the vanishing point to two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly. It can achieve promising performance.
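The core computation, a Dijkstra minimum-cost map rooted at the vanishing point, can be sketched as follows. The per-pixel cost here is a generic placeholder (e.g., gradient magnitude) rather than the paper's combined disparity-plus-gradient cost, and 4-connectivity is an assumption of this sketch.

```python
# Minimal sketch of a Dijkstra minimum-cost map over an image grid, rooted at one pixel.
import heapq
import numpy as np

def dijkstra_cost_map(cost, source):
    """cost: 2-D array of per-pixel step costs; source: (row, col). Returns a distance map."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected neighbors
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# Usage: dist = dijkstra_cost_map(gradient_magnitude, vanishing_point)
# A road border candidate ends at np.argmin(dist[-1, :]) in the last image row.
```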
NASA Technical Reports Server (NTRS)
Baily, N. A.
1973-01-01
The radiological implications of statistical variations in energy deposition by ionizing radiation were investigated in the conduct of the following experiments: (1) study of the production of secondary particles generated by the passage of the primary radiation through bone and muscle; (2) the study of the ratio of nonreparable to reparable damage in DNA as a function of different energy deposition patterns generated by X rays versus heavy fast charged particles; (3) the use of electronic radiography systems for direct fluoroscopic tomography and for the synthesis of multiple planes; and (4) the determination of the characteristics of system response to split fields having different contrast levels, and of minimum detectable contrast levels between the halves under realistic clinical situations.
HST images of the eclipsing pulsar B1957+20
NASA Technical Reports Server (NTRS)
Fruchter, Andrew S.; Bookbinder, Jay; Bailyn, Charles D.
1995-01-01
We have obtained images of the eclipsing pulsar binary PSR B1957+20 using the Planetary Camera of the Hubble Space Telescope (HST). The high spatial resolution of this instrument has allowed us to separate the pulsar system from a nearby background star which has confounded ground-based observations of this system near optical minimum. Our images limit the temperature of the backside of the companion to T ≲ 2800 K, about a factor of 2 less than the average temperature of the side of the companion facing the pulsar, and provide a marginal detection of the companion at optical minimum. The magnitude of this detection is consistent with previous work which suggests that the companion nearly fills its Roche lobe and is supported through tidal dissipation.
NASA Astrophysics Data System (ADS)
MacMahon, Heber; Vyborny, Carl; Powell, Gregory; Doi, Kunio; Metz, Charles E.
1984-08-01
In digital radiography the pixel size used determines the potential spatial resolution of the system. The need for spatial resolution varies depending on the subject matter imaged. In many areas, including the chest, the minimum spatial resolution requirements have not been determined. Sarcoidosis is a disease which frequently causes subtle interstitial infiltrates in the lungs. As the initial step in an investigation designed to determine the minimum pixel size required in digital chest radiographic systems, we have studied 1 mm pixel digitized images of patients with early pulmonary sarcoidosis. The results of this preliminary study suggest that neither mild interstitial pulmonary infiltrates nor other abnormalities such as pneumothoraces can be detected reliably with 1 mm pixel digital images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggest that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
Zhang, Qing; Zhu, Liang; Feng, Hanhua; Ang, Simon; Chau, Fook Siong; Liu, Wen-Tso
2006-01-18
This paper reported the development of a microfluidic device for the rapid detection of viable and nonviable microbial cells through dual labeling by fluorescent in situ hybridization (FISH) and quantum dots (QDs)-labeled immunofluorescent assay (IFA). The coin-sized device consists of a microchannel and filtering pillars (gap = 1-2 µm) and was demonstrated to effectively trap and concentrate microbial cells (i.e. Giardia lamblia). After sample injection, FISH probe solution and QDs-labeled antibody solution were sequentially pumped into the device to accelerate the fluorescent labeling reactions at optimized flow rates (i.e. 1 and 20 µL/min, respectively). After 2 min washing for each assay, the whole process could be finished within 30 min, with minimum consumption of labeling reagents and superior fluorescent signal intensity. The choice of QDs 525 for IFA resulted in a bright and stable fluorescent signal, with minimum interference with the Cy3 signal from FISH detection.
Reduction of Calcofluor in Solithane Conformal Coatings of Printed Wiring Boards
NASA Technical Reports Server (NTRS)
Miller, Michael K.
1997-01-01
An investigation on the outgassing of a pigment employed as a fluorescent medium in conformal coatings has been performed. The conformal coatings in question are used to protect printed wiring boards from environmental hazards such as dust and moisture. The pigment is included in the coating at low concentration to allow visual inspection of the conformal coating for flaw detection. Calcofluor, the fluorescent pigment, has been found to be a significant outgasser under vacuum conditions and a potential source of contamination to flight hardware. A minimum acceptable concentration of Calcofluor for flaw detection is desirable. Tests have been carried out using a series of Solithane™ conformal coating samples, with progressively lower Calcofluor concentrations, to determine the minimum required concentration of Calcofluor. It was found that the concentration of Calcofluor could be reduced from 0.115% to 0.0135% without significant loss in the ability to detect flaws, while at the same time significant reductions in Calcofluor outgassing and possible contamination of systems could be realized.
NASA Astrophysics Data System (ADS)
Wang, Yi; Zhang, Ao; Ma, Jing
2017-07-01
Minimum-shift keying (MSK) has the advantages of constant envelope, continuous phase, and high spectral efficiency, and it is applied in radio communication and optical fiber communication. MSK modulation with coherent detection is proposed for the ground-to-satellite laser communication system; in addition, considering the inherent noise of the uplink, such as intensity scintillation and beam wander, the communication performance of the MSK modulation system with coherent detection is studied for the ground-to-satellite laser uplink. Based on the gamma-gamma channel model, the closed form of bit error rate (BER) of MSK modulation with coherent detection is derived. In weak, medium, and strong turbulence, the BER performance of the MSK modulation system is simulated and analyzed. To meet the requirements of the ground-to-satellite coherent MSK system and to optimize the parameters and configuration of the transmitter and receiver, the influences of the beam divergence angle, the zenith angle, the transmitter beam radius, and the receiver diameter are studied.
Real-Time Detection of Telomerase in a Microelectromechanical Systems Platform
2005-05-01
Telomerase accomplishes this by alleviating the "end-replication problem" (6,10,14,23,33,43). First described by Hayflick in 1965, the end-replication... were produced to determine the minimum detection limit of the ABI Prism 7000 as an optical fluorescent detection device. In addition, I wanted to
2013-09-30
performance of algorithms detecting dives, strokes, clicks, respiration and gait changes. (ii) Calibration errors: Size and power constraints in... acceptance parameters used to detect and classify events. For example, swim stroke detection requires parameters defining the minimum magnitude and the min... and max duration of a stroke. Species-dependent parameters can be selected from existing DTAG data but other parameters depend on the size of the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chida, K.; Yamauchi, Y.; Arakawa, T.
2013-12-04
We performed resistively detected nuclear magnetic resonance (RDNMR) to study the electron spin polarization in the non-equilibrium quantum Hall regime. By measuring the Knight shift, we derive the source-drain bias voltage dependence of the electron spin polarization in quantum wires. The electron spin polarization shows a minimum value around the threshold voltage of the dynamic nuclear polarization.
Elemental GCR Observations during the 2009-2010 Solar Minimum Period
NASA Technical Reports Server (NTRS)
Lave, K. A.; Israel, M. H.; Binns, W. R.; Christian, E. R.; Cummings, A. C.; Davis, A. J.; deNolfo, G. A.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.;
2013-01-01
Using observations from the Cosmic Ray Isotope Spectrometer (CRIS) onboard the Advanced Composition Explorer (ACE), we present new measurements of the galactic cosmic ray (GCR) elemental composition and energy spectra for the species B through Ni in the energy range approx. 50-550 MeV/nucleon during the record setting 2009-2010 solar minimum period. These data are compared with our observations from the 1997-1998 solar minimum period, when solar modulation in the heliosphere was somewhat higher. For these species, we find that the intensities during the 2009-2010 solar minimum were approx. 20% higher than those in the previous solar minimum, and in fact were the highest GCR intensities recorded during the space age. Relative abundances for these species during the two solar minimum periods differed by small but statistically significant amounts, which are attributed to the combination of spectral shape differences between primary and secondary GCRs in the interstellar medium and differences between the levels of solar modulation in the two solar minima. We also present the secondary-to-primary ratios B/C and (Sc+Ti+V)/Fe for both solar minimum periods, and demonstrate that these ratios are reasonably well fit by a simple "leaky-box" galactic transport model that is combined with a spherically symmetric solar modulation model.
40 CFR 63.7525 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... tolerance of 1.27 centimeters of water or a transducer with a minimum tolerance of 1 percent of the pressure... use a fabric filter bag leak detection system to comply with the requirements of this subpart, you must install, calibrate, maintain, and continuously operate a bag leak detection system as specified in...
Esralew, Rachel A.; Andrews, William J.; Smith, S. Jerrod
2011-01-01
The U.S. Geological Survey, in cooperation with the city of Oklahoma City, collected water-quality samples from the North Canadian River at the streamflow-gaging station near Harrah, Oklahoma (Harrah station), since 1968, and at an upstream streamflow-gaging station at Britton Road at Oklahoma City, Oklahoma (Britton Road station), since 1988. Statistical summaries and frequencies of detection of water-quality constituent data from water samples, and summaries of water-quality constituent data from continuous water-quality monitors are described from the start of monitoring at those stations through 2009. Differences in concentrations between stations and time trends for selected constituents were evaluated to determine the effects of: (1) wastewater effluent discharges, (2) changes in land-cover, (3) changes in streamflow, (4) increases in urban development, and (5) other anthropogenic sources of contamination on water quality in the North Canadian River downstream from Oklahoma City. Land-cover changes between 1992 and 2001 in the basin between the Harrah station and Lake Overholser upstream included an increase in developed/barren land-cover and a decrease in pasture/hay land cover. There were no significant trends in median and greater streamflows at either streamflow-gaging station, but there were significant downward trends in lesser streamflows, especially after 1999, which may have been associated with decreases in precipitation between 1999 and 2009 or construction of low-water dams on the river upstream from Oklahoma City in 1999. Concentrations of dissolved chloride, lead, cadmium, and chlordane most frequently exceeded the Criterion Continuous Concentration (a water-quality standard for protection of aquatic life) in water-quality samples collected at both streamflow-gaging stations. Visual trends in annual frequencies of detection were investigated for selected pesticides with frequencies of detection greater than 10 percent in all water samples collected at both streamflow-gaging stations. Annual frequencies of detection of 2,4-dichlorophenoxyacetic acid and bromacil increased with time. Annual frequencies of detection of atrazine, chlorpyrifos, diazinon, dichlorprop, and lindane decreased with time. Dissolved nitrogen and phosphorus concentrations were significantly greater in water samples collected at the Harrah station than at the Britton Road station, whereas specific conductance was greater at the Britton Road station. Concentrations of dissolved oxygen, biochemical oxygen demand, and fecal coliform bacteria were not significantly different between stations. Daily minimum, mean, and maximum specific conductance collected from continuous water-quality monitors were significantly greater at the Britton Road station than in water samples collected at the Harrah station. Daily minimum, maximum, and diurnal fluctuations of water temperature collected from continuous water-quality monitors were significantly greater at the Harrah station than at the Britton Road station. The daily maximums and diurnal range of dissolved oxygen concentrations were significantly greater in water samples collected at the Britton Road station than at the Harrah station, but daily mean dissolved oxygen concentrations in water at those streamflow-gaging stations were not significantly different. 
Daily mean and diurnal water temperature ranges increased with time at the Britton Road and Harrah streamflow-gaging stations, whereas daily mean and diurnal specific conductance ranges decreased with time at both streamflow-gaging stations from 1988–2009. Daily minimum dissolved oxygen concentrations collected from continuous water-quality monitors more frequently indicated hypoxic conditions at the Harrah station than at the Britton Road station after 1999. Fecal coliform bacteria counts in water decreased slightly from 1988–2009 at the Britton Road station. The Seasonal Kendall's tau test indicated significant downward trends in
Haas, Derek A; Eslinger, Paul W; Bowyer, Theodore W; Cameron, Ian M; Hayes, James C; Lowrey, Justin D; Miley, Harry S
2017-11-01
The Comprehensive Nuclear-Test-Ban Treaty bans all nuclear tests and mandates development of verification measures to detect treaty violations. One verification measure is detection of radioactive xenon isotopes produced in the fission of actinides. The International Monitoring System (IMS) currently deploys automated radioxenon systems that can detect four radioxenon isotopes. Radioxenon systems with lower detection limits are currently in development. Historically, the sensitivity of radioxenon systems was measured by the minimum detectable concentration for each isotope. In this paper we analyze the response of radioxenon systems using rigorous metrics in conjunction with hypothetical representative releases indicative of an underground nuclear explosion instead of using only minimum detectable concentrations. Our analyses incorporate the impact of potential spectral interferences on detection limits and the importance of measuring isotopic ratios of the relevant radioxenon isotopes in order to improve discrimination from background sources, particularly for low-level releases. To provide a sufficient data set for analysis, hypothetical representative releases are simulated every day from the same location for an entire year. The performance of three types of samplers is evaluated assuming they are located at 15 IMS radionuclide stations in the region of the release point. The performance of two IMS-deployed samplers and a next-generation system is compared with proposed metrics for detection and discrimination using representative releases from the nuclear test site used by the Democratic People's Republic of Korea. Copyright © 2017 Elsevier Ltd. All rights reserved.
Structure for identifying, locating and quantifying physical phenomena
Richardson, John G.
2006-10-24
A method and system for detecting, locating and quantifying a physical phenomenon such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.
Richardson, John G.
2006-01-24
A method and system for detecting, locating and quantifying a physical phenomenon such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.
A Study of the World’s Naval Surface-to-Air Missile Defense Systems.
1984-12-01
in various installations to foreign radars, such as the Contraves SeaHunter, SanGiogio NA9. A lightweight, three-round launcher (based on the short... [Table-of-contents fragment: Losses; B. Minimum Signal Detection; 1. Noise Figure F; 2. Minimum Signal-to-...] ...the idealized radar equation becomes... (2.11). Losses: There are many sources of power loss in the echo, and these losses reduce the energy of
NASA Technical Reports Server (NTRS)
Eaton, J. E.; Cherepashchuk, A. M.; Khaliullin, K. F.
1982-01-01
The 1200-1900 angstrom region of V444 Cyg was observed continuously, along with fine error sensor (FES) observations in the optical. More than half of a primary minimum and almost a complete secondary minimum were observed. It is found that the time of minimum for the secondary eclipse is consistent with that for the primary eclipse, and the ultraviolet times of minimum are consistent with the optical ones. The spectrum shows a considerable amount of phase dependence. The general shapes and depths of the light curves for the FES signal and the 1565-1900 angstrom continuum are similar to those for the blue continuum. The FES, however, detected an atmospheric eclipse in line absorption at about the phase at which the N IV absorption was strongest. It is suggested that there is a source of continuum absorption shortward of 1460 angstrom which exists throughout a large part of the extended atmosphere and which, by implication, must considerably redden the ultraviolet continua of WN stars. A fairly high degree of ionization for the inner part of the WN star's atmosphere is implied.
The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?
Boyen, Kris; Başkent, Deniz; van Dijk, Pim
2015-01-01
Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss-matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans.
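The adaptive psychoacoustic procedure is not specified in detail above; a common choice for estimating a minimum detectable gap is a 2-down/1-up staircase, sketched below under assumed step sizes and stopping rules that are illustrative rather than those used in the study.

```python
# Minimal sketch of a 2-down/1-up adaptive staircase for estimating a gap threshold.

def staircase_gap_threshold(respond, start_ms=64.0, factor=0.5,
                            reversals_needed=8, max_trials=200):
    """respond(gap_ms) -> True if the listener detected the gap.
    Returns the estimated gap threshold (ms) as the mean of the last reversals."""
    gap, correct_in_a_row, direction = start_ms, 0, -1
    reversal_gaps = []
    for _ in range(max_trials):
        if len(reversal_gaps) >= reversals_needed:
            break
        if respond(gap):
            correct_in_a_row += 1
            if correct_in_a_row == 2:           # two correct in a row -> shorten the gap
                correct_in_a_row = 0
                if direction == +1:
                    reversal_gaps.append(gap)   # reversal: easier -> harder
                direction = -1
                gap *= factor
        else:                                    # one miss -> lengthen the gap
            correct_in_a_row = 0
            if direction == -1:
                reversal_gaps.append(gap)       # reversal: harder -> easier
            direction = +1
            gap /= factor
    last = reversal_gaps[-6:] or [gap]
    return sum(last) / len(last)
```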
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary: The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract: United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
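A minimal sketch of the kind of minimum detectable difference (MDD) calculation recommended above, for two abundance estimates with reported standard errors, is shown below; it assumes a normal-approximation two-sample comparison, and the alpha and power values are illustrative choices.

```python
# Minimal sketch of a minimum detectable difference between two population estimates.
from scipy.stats import norm

def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.80):
    """Smallest true difference detectable at the given type-I error rate and power,
    assuming independent, approximately normal estimates with standard errors se1, se2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * (se1**2 + se2**2) ** 0.5

# Example: two abundance estimates with standard errors of 120 and 150 animals.
print(round(minimum_detectable_difference(120, 150)))   # ~538 animals
```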
NASA Astrophysics Data System (ADS)
Cicchetti, A.; Nenna, C.; Plaut, J. J.; Plettemeier, D.; Noschese, R.; Cartacci, M.; Orosei, R.
2017-11-01
The Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) (Picardi et al., 2005) is a synthetic aperture low frequency radar altimeter, onboard the ESA Mars Express orbiter, launched in June 2003. It is the first and so far the only spaceborne radar that has observed the Martian moon Phobos. Radar echoes were collected on different flyby trajectories. The primary aim of sounding Phobos is to prove the feasibility of deep sounding into its subsurface. MARSIS is optimized for deep penetration investigations and is capable of transmitting at four different bands between 1.3 MHz and 5.5 MHz with a 1 MHz bandwidth. Unfortunately the instrument was originally designed to operate exclusively on Mars, assuming that Phobos would not be observed. Following this assumption, a protection mechanism was implemented in the hardware (HW) to maintain a minimum time separation between transmission and reception phases of the radar. This limitation does not have any impact on Mars observation but it prevented the observation of Phobos. In order to successfully operate the instrument at Phobos, a particular configuration of the MARSIS onboard software (SW) parameters, called "Range Ambiguity," was implemented to override the HW protection zone, ensuring at the same time a high level of safety of the instrument. This paper describes the principles of MARSIS onboard processing, and the procedure through which the parameters of the processing software were tuned to observe targets below the minimum distance allowed by hardware. Some preliminary results of data analysis will be shown, with the support of radar echo simulations. A qualitative comparison between the simulated results and the actual data does not support the detection of subsurface reflectors.
Escobar, André; da Rocha, Rozana Wendler; Midon, Monica; de Almeida, Ricardo Miyasaka; Filho, Darcio Zangirolami; Werther, Karin
2017-06-01
The aim of this study was to determine the minimum anesthetic concentration (MAC) of isoflurane, and to investigate if tramadol changes the isoflurane MAC in white-eyed parakeets (Psittacara leucophthalmus). Ten adult birds weighing 157 ± 9 g were anesthetized with isoflurane in oxygen under mechanical ventilation. Isoflurane concentration for the first bird was adjusted to 2.2%, and after 15 min an electrical stimulus was applied in the thigh area to observe the response (movement or nonmovement). Isoflurane concentration for the subsequent bird was increased by 10% if the previous bird moved, or decreased by 10% if the previous bird did not move. This procedure was performed serially until at least four sequential crossover events were detected. A crossover event was defined as a sequence of two birds with different responses (positive or negative) to the electrical stimulus. Isoflurane MAC was calculated as the mean isoflurane concentration value at the crossover events. After 1 wk, the same birds were reanesthetized with isoflurane and MAC was determined at 15 and 30 min after intramuscular administration of 10 mg/kg of tramadol using the same method. A paired t-test (P < 0.05) was used to detect significant differences for MAC between treatments. Isoflurane MAC in this population of white-eyed parakeets was 2.47 ± 0.09%. Isoflurane MAC values 15 and 30 min after tramadol administration were indistinguishable from each other (pooled value was 2.50 ± 0.18%); they were also indistinguishable from isoflurane MAC without tramadol. The isoflurane MAC value in white-eyed parakeets is higher than reported for other bird species. Tramadol (10 mg/kg, i.m.) does not change isoflurane MAC in these birds.
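The up-and-down design described above can be sketched as follows; the response oracle and the use of the mean of each crossover pair are assumptions of this illustration, not necessarily the exact calculation used in the study.

```python
# Minimal sketch of up-and-down MAC estimation from sequential crossover events.

def estimate_mac(moved, start=2.2, step=0.10, crossovers_needed=4, max_subjects=50):
    """moved(concentration_percent) -> True if the subject moved after the stimulus."""
    conc = start
    history = []                  # (concentration, moved?) for each subject in sequence
    crossover_concs = []
    for _ in range(max_subjects):
        if len(crossover_concs) >= crossovers_needed:
            break
        response = moved(conc)
        if history and history[-1][1] != response:               # two successive subjects disagree
            crossover_concs.append((history[-1][0] + conc) / 2)  # mean of the crossover pair
        history.append((conc, response))
        conc *= (1 + step) if response else (1 - step)           # movement -> raise dose next time
    return sum(crossover_concs) / len(crossover_concs) if crossover_concs else None
```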
Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter
2017-03-01
Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. Technical recall rate for both monitors and angular size at each level of motion were calculated. χ² tests were used to test whether significant differences in blurring detection existed between 2.3- and 5-MP monitors. The technical recall rates for 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ²(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
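The angular sizes quoted above follow from simple viewing geometry; the sketch below reproduces the 55-275 arc-second range for 0.2-1.0 mm of motion under an assumed viewing distance of about 750 mm, which is inferred from the reported values rather than stated in the study.

```python
# Minimal sketch of the angular-size calculation; the viewing distance is an assumption.
import math

def angular_size_arcsec(extent_mm, viewing_distance_mm=750.0):
    """Angular size of a small extent seen from the given distance, in arc seconds."""
    return math.degrees(math.atan(extent_mm / viewing_distance_mm)) * 3600

for motion in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{motion:.1f} mm -> {angular_size_arcsec(motion):.0f} arc seconds")
# Prints roughly 55, 110, 165, 220, 275 arc seconds.
```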
Research on the Optimum Water Content of Detecting Soil Nitrogen Using Near Infrared Sensor
He, Yong; Nie, Pengcheng; Dong, Tao; Qu, Fangfang; Lin, Lei
2017-01-01
Nitrogen is one of the important indexes to evaluate the physiological and biochemical properties of soil. The level of soil nitrogen content influences the nutrient levels of crops directly. The near infrared sensor can be used to detect the soil nitrogen content rapidly, nondestructively, and conveniently. In order to investigate the effect of different soil water content on soil nitrogen detection by near infrared sensor, the soil samples were subjected to different drying times and the corresponding water content was measured. The drying time was set from 1 h to 8 h, and every hour 90 samples (10 samples for each nitrogen concentration) were measured. The spectral information of samples was obtained by near infrared sensor; meanwhile, the soil water content was calculated every 1 h. The prediction model of soil nitrogen content was established by two linear modeling methods, including partial least squares (PLS) and uninformative variable elimination (UVE). The experiment shows that the soil has the highest detection accuracy when the drying time is 3 h and the corresponding soil water content is 1.03%. The correlation coefficients of the calibration set are 0.9721 and 0.9656, and the correlation coefficients of the prediction set are 0.9712 and 0.9682, respectively. The prediction accuracy of both models is high, while the prediction performance of the PLS model is better and more stable. The results indicate that the soil water content at 1.03% has the minimum influence on the detection of soil nitrogen content using a near infrared sensor while the detection accuracy is the highest and the time cost is the lowest, which is of great significance to develop a portable apparatus detecting nitrogen in the field accurately and rapidly. PMID:28880202
Research on the Optimum Water Content of Detecting Soil Nitrogen Using Near Infrared Sensor.
He, Yong; Xiao, Shupei; Nie, Pengcheng; Dong, Tao; Qu, Fangfang; Lin, Lei
2017-09-07
Nitrogen is one of the important indexes to evaluate the physiological and biochemical properties of soil. The level of soil nitrogen content influences the nutrient levels of crops directly. The near infrared sensor can be used to detect the soil nitrogen content rapidly, nondestructively, and conveniently. In order to investigate the effect of different soil water content on soil nitrogen detection by near infrared sensor, the soil samples were subjected to different drying times and the corresponding water content was measured. The drying time was set from 1 h to 8 h, and every hour 90 samples (10 samples for each nitrogen concentration) were measured. The spectral information of samples was obtained by near infrared sensor; meanwhile, the soil water content was calculated every 1 h. The prediction model of soil nitrogen content was established by two linear modeling methods, including partial least squares (PLS) and uninformative variable elimination (UVE). The experiment shows that the soil has the highest detection accuracy when the drying time is 3 h and the corresponding soil water content is 1.03%. The correlation coefficients of the calibration set are 0.9721 and 0.9656, and the correlation coefficients of the prediction set are 0.9712 and 0.9682, respectively. The prediction accuracy of both models is high, while the prediction performance of the PLS model is better and more stable. The results indicate that the soil water content at 1.03% has the minimum influence on the detection of soil nitrogen content using a near infrared sensor while the detection accuracy is the highest and the time cost is the lowest, which is of great significance to develop a portable apparatus detecting nitrogen in the field accurately and rapidly.
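A minimal sketch of the PLS calibration step, using scikit-learn, is shown below; the spectra and nitrogen values are random placeholders and the number of latent variables is an illustrative choice, so only the workflow (calibration set, prediction set, correlation coefficients) mirrors the study.

```python
# Minimal sketch of a PLS calibration/prediction workflow for NIR spectra (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 256))        # 90 NIR spectra, 256 wavelength points (placeholder)
y = rng.uniform(0.5, 3.0, size=90)    # soil nitrogen content (placeholder values)

X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

r_cal = np.corrcoef(y_cal, pls.predict(X_cal).ravel())[0, 1]
r_pred = np.corrcoef(y_pred, pls.predict(X_pred).ravel())[0, 1]
print(f"calibration r = {r_cal:.3f}, prediction r = {r_pred:.3f}")
```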
Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter
2017-01-01
Objective: Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. Methods: 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. Technical recall rate for both monitors and angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between 2.3- and 5-MP monitors. Results: The technical recall rate for 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2 (1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. Conclusion: According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring. PMID:28134567
Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.
Abu-Alqumsan, Mohammad; Peer, Angelika
2016-06-01
Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
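A minimal sketch of the standard CCA detector that the study takes as its baseline is given below: multi-channel EEG is correlated with sine/cosine reference signals at each candidate stimulus frequency, and the frequency with the largest canonical correlation is selected. The sampling rate, number of harmonics, and window length are assumptions of this sketch.

```python
# Minimal sketch of CCA-based SSVEP frequency detection (baseline method, not CVARS).
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_correlation(eeg, freq, fs, n_harmonics=2):
    """eeg: (n_samples, n_channels). Returns the first canonical correlation with a
    sine/cosine reference set at the given stimulus frequency and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                           for h in range(n_harmonics) for f in (np.sin, np.cos)])
    cca = CCA(n_components=1).fit(eeg, ref)
    u, v = cca.transform(eeg, ref)
    return np.corrcoef(u.ravel(), v.ravel())[0, 1]

def detect_ssvep(eeg, candidate_freqs, fs=250.0):
    """Return the candidate frequency with the largest canonical correlation."""
    scores = {f: cca_correlation(eeg, f, fs) for f in candidate_freqs}
    return max(scores, key=scores.get), scores
```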
Colón-González, Felipe J; Lake, Iain R; Morbey, Roger A; Elliot, Alex J; Pebody, Richard; Smith, Gillian E
2018-04-24
Syndromic surveillance complements traditional public health surveillance by collecting and analysing health indicators in near real time. The rationale of syndromic surveillance is that it may detect health threats faster than traditional surveillance systems permitting more timely, and hence potentially more effective public health action. The effectiveness of syndromic surveillance largely relies on the methods used to detect aberrations. Very few studies have evaluated the performance of syndromic surveillance systems and consequently little is known about the types of events that such systems can and cannot detect. We introduce a framework for the evaluation of syndromic surveillance systems that can be used in any setting based upon the use of simulated scenarios. For a range of scenarios this allows the time and probability of detection to be determined and uncertainty is fully incorporated. In addition, we demonstrate how such a framework can model the benefits of increases in the number of centres reporting syndromic data and also determine the minimum size of outbreaks that can or cannot be detected. Here, we demonstrate its utility using simulations of national influenza outbreaks and localised outbreaks of cryptosporidiosis. Influenza outbreaks are consistently detected with larger outbreaks being detected in a more timely manner. Small cryptosporidiosis outbreaks (<1000 symptomatic individuals) are unlikely to be detected. We also demonstrate the advantages of having multiple syndromic data streams (e.g. emergency attendance data, telephone helpline data, general practice consultation data) as different streams are able to detect different outbreak types with different efficacy (e.g. emergency attendance data are useful for the detection of pandemic influenza but not for outbreaks of cryptosporidiosis). We also highlight that for any one disease, the utility of data streams may vary geographically, and that the detection ability of syndromic surveillance varies seasonally (e.g. an influenza outbreak starting in July is detected sooner than one starting later in the year). We argue that our framework constitutes a useful tool for public health emergency preparedness in multiple settings. The proposed framework allows the exhaustive evaluation of any syndromic surveillance system and constitutes a useful tool for emergency preparedness and response.
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of the intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
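The first-approximation rule in generalization (1) can be made concrete with the classical closed forms for parallel-line search, sketched below for a circular target and for a thin elongate target of uniformly random orientation, both smaller than the line spacing.

```python
# Minimal sketch of classical intersection probabilities for parallel-line search.
import math

def p_circle(diameter, spacing):
    """Circular target of given diameter, parallel lines 'spacing' apart (diameter <= spacing)."""
    return diameter / spacing

def p_segment(length, spacing):
    """Thin elongate target of given length, uniformly random orientation (length <= spacing)."""
    return 2 * length / (math.pi * spacing)

print(p_circle(100, 400))    # 0.25: proportional to greatest dimension / line spacing
print(p_segment(100, 400))   # ~0.16: lower for the same dimension because of orientation
```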
Biver, Marc; Filella, Montserrat
2016-05-03
The toxicity of Cd being well established and that of Te suspected, the bulk, surface-normalized steady-state dissolution rates of two industrially important binary tellurides (polycrystalline cadmium and bismuth tellurides) were studied over the pH range 3-11, at various temperatures (25-70 °C) and dissolved oxygen concentrations (0-100% O2 in the gas phase). The behaviors of the two tellurides are strikingly different. The dissolution rates of CdTe monotonically decreased with increasing pH, the trend becoming more pronounced with increasing temperature. Activation energies were of the order of magnitude associated with surface-controlled processes; they decreased with decreasing acidity. At pH 7, the CdTe dissolution rate increased linearly with dissolved oxygen. In anoxic solution, CdTe dissolved at a finite rate. In contrast, the dissolution rate of Bi2Te3 passed through a minimum at pH 5.3. The activation energy reached a maximum at the rate minimum at pH 5.3 and fell below the threshold for diffusion control at pH 11. No oxygen dependence was detected. Bi2Te3 dissolves much more slowly than CdTe, by one to more than 3.5 orders of magnitude at the Bi2Te3 rate minimum. Both will dissolve under long-term landfill deposition conditions, but comparatively slowly.
Muhamad, Hairul Masrini; Xu, Xiaomei; Zhang, Xuelei; Jaaman, Saifullah Arifin; Muda, Azmi Marzuki
2018-05-01
Studies of Irrawaddy dolphin acoustics assist in understanding the behaviour of the species and thereby its conservation. Whistle signals emitted by Irrawaddy dolphins within the Bay of Brunei in Malaysian waters were characterized. A total of 199 whistles were analysed from seven sightings between January and April 2016. Six types of whistle contours, named constant, upsweep, downsweep, concave, convex, and sine, were detected when the dolphins engaged in traveling, foraging, and socializing activities. The whistle durations ranged between 0.06 and 3.86 s. The minimum frequency recorded was 443 Hz [Mean = 6000 Hz, standard deviation (SD) = 2320 Hz] and the maximum frequency recorded was 16 071 Hz (Mean = 7139 Hz, SD = 2522 Hz). The mean frequency range (F.R.) for the whistles was 1148 Hz (Minimum F.R. = 0 Hz, Maximum F.R. = 4446 Hz; SD = 876 Hz). Whistles in the Bay of Brunei were compared with populations recorded from the waters of Matang and Kalimantan. The comparisons showed differences in whistle duration, minimum frequency, start frequency, and number of inflection points. Variation in whistle occurrence and frequency may be associated with surface behaviour, ambient noise, and recording limitations. This will be an important element when planning a monitoring program.
Pérez-Castro, Rosalía; Castellanos, Jaime E; Olano, Víctor A; Matiz, María Inés; Jaramillo, Juan F; Vargas, Sandra L; Sarmiento, Diana M; Stenström, Thor Axel; Overgaard, Hans J
2016-01-01
The Aedes aegypti vector for dengue virus (DENV) has been reported in urban and periurban areas. The information about DENV circulation in mosquitoes in Colombian rural areas is limited, so we aimed to evaluate the presence of DENV in Ae. aegypti females caught in rural locations of two Colombian municipalities, Anapoima and La Mesa. Mosquitoes from 497 rural households in 44 different rural settlements were collected. Pools of about 20 Ae. aegypti females were processed for DENV serotype detection. DENV in mosquitoes was detected in 74% of the analysed settlements with a pool positivity rate of 62%. The estimated individual mosquito infection rate was 4.12% and the minimum infection rate was 33.3/1,000 mosquitoes. All four serotypes were detected; the most frequent being DENV-2 (50%) and DENV-1 (35%). Two to three serotypes were detected simultaneously in separate pools. This is the first report on the co-occurrence of natural DENV infection of mosquitoes in Colombian rural areas. The findings are important for understanding dengue transmission and planning control strategies. A potential latent virus reservoir in rural areas could spill over to urban areas during population movements. Detecting DENV in wild-caught adult mosquitoes should be included in the development of dengue epidemic forecasting models. PMID:27074252
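The pool-based statistics quoted above follow from simple counting rules; the sketch below assumes the usual convention that each positive pool contains at least one infected mosquito, and uses hypothetical pool counts rather than the study's data.

def pool_stats(n_pools, n_positive_pools, mosquitoes_per_pool=20):
    """Pool positivity rate and minimum infection rate (MIR) per 1,000 mosquitoes,
    assuming at least one infected mosquito in each positive pool."""
    total_mosquitoes = n_pools * mosquitoes_per_pool
    positivity = n_positive_pools / n_pools
    mir = 1000.0 * n_positive_pools / total_mosquitoes
    return positivity, mir

pos_rate, mir = pool_stats(n_pools=60, n_positive_pools=37)   # hypothetical counts
print(f"pool positivity: {pos_rate:.0%}, MIR: {mir:.1f} per 1,000 mosquitoes")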
Estimation of Rain Intensity Spectra over the Continental US Using Ground Radar-Gauge Measurements
NASA Technical Reports Server (NTRS)
Lin, Xin; Hou, Arthur Y.
2013-01-01
A high-resolution surface rainfall product is used to estimate rain characteristics over the continental US as a function of rain intensity. By treating each datum at 4-km horizontal and 1-h temporal resolution as an individual precipitating/nonprecipitating sample, statistics of rain occurrence and rain volume, including their geographical and seasonal variations, are documented. Quantitative estimations are also conducted to evaluate the impact of missing light rain events due to satellite sensors' detection capabilities. It is found that statistics of rain characteristics have large seasonal and geographical variations across the continental US. Although heavy rain events (>10 mm/h) occupy only 2.6% of total rain occurrence, they contribute about 27% of total rain volume. Light rain events (<1.0 mm/h), occurring much more frequently (65%) than heavy rain events, can also make important contributions (15%) to the total rain volume. For minimum detectable rain rates set at 0.5 and 0.2 mm/h, which are close to the sensitivities of current and future spaceborne precipitation radars, about 43% and 11% of total rain occurrence falls below these thresholds, representing 7% and 0.8% of total rain volume, respectively. For passive microwave sensors, with rain pixel sizes ranging from 14 to 16 km and minimum detectable rain rates around 1 mm/h, the missed light rain events may account for 70% of total rain occurrence and 16% of rain volume. Statistics of rain characteristics are also examined on domains with different temporal and spatial resolutions. Current issues in estimates of rain characteristics from satellite measurements and model outputs are discussed.
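The fractions of rain occurrence and rain volume falling below a sensor's minimum detectable rain rate can be computed directly from a sample of precipitating rain rates, as in the sketch below; the lognormal sample is a synthetic stand-in, not the radar-gauge product analyzed above.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for hourly 4-km precipitating rain rates (mm/h); a lognormal is a
# rough approximation only, not the distribution of the actual radar-gauge product.
rain_rates = rng.lognormal(mean=0.0, sigma=1.2, size=500_000)

def fractions_below(rates, threshold):
    """Fraction of rain occurrence and of rain volume contributed by rates below threshold."""
    occ = np.mean(rates < threshold)
    vol = rates[rates < threshold].sum() / rates.sum()
    return occ, vol

for thr in (0.2, 0.5, 1.0):
    occ, vol = fractions_below(rain_rates, thr)
    print(f"below {thr:>3} mm/h: {occ:5.1%} of rain occurrence, {vol:5.1%} of rain volume")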
A new approach to keratoconus detection based on corneal morphogeometric analysis
Bataille, Laurent; Fernández-Pacheco, Daniel G.; Cañavate, Francisco J. F.; Alió, Jorge L.
2017-01-01
Purpose To characterize corneal structural changes in keratoconus using a new morphogeometric approach and to evaluate its potential diagnostic ability. Methods Comparative study including 464 eyes of 464 patients (age range, 16 to 72 years) divided into two groups: control group (143 healthy eyes) and keratoconus group (321 keratoconus eyes). Topographic information (Sirius, CSO, Italy) was processed with SolidWorks v2012 and a solid model representing the geometry of each cornea was generated. The following parameters were defined: anterior (Aant) and posterior (Apost) corneal surface areas, area of the cornea within the sagittal plane passing through the Z axis and the apex (Aapexant, Aapexpost) and minimum thickness points (Amctant, Amctpost) of the anterior and posterior corneal surfaces, and average distance from the Z axis to the apex (Dapexant, Dapexpost) and minimum thickness points (Dmctant, Dmctpost) of both corneal surfaces. Results Significant differences between the control and keratoconus groups were found in Aapexant, Aapexpost, Amctant, Amctpost, Dapexant, Dapexpost (all p<0.001), Apost (p = 0.014), and Dmctpost (p = 0.035). Significant correlations in the keratoconus group were found between Aant and Apost (r = 0.836), Amctant and Amctpost (r = 0.983), and Dmctant and Dmctpost (r = 0.954, all p<0.001). A logistic regression analysis revealed that the detection of keratoconus grade I (Amsler-Krumeich) was related to Apost, Atot, Aapexant, Amctant, Amctpost, Dapexpost, Dmctant and Dmctpost (Hosmer-Lemeshow: p>0.05, Nagelkerke R2: 0.926). The overall percentage of cases correctly classified by the model was 97.30%. Conclusions Our morphogeometric approach based on the analysis of the cornea as a solid is useful for the characterization and detection of keratoconus. PMID:28886157
Subband-Based Group Delay Segmentation of Spontaneous Speech into Syllable-Like Units
NASA Astrophysics Data System (ADS)
Nagarajan, T.; Murthy, H. A.
2004-12-01
In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.
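A compact sketch of the central trick, treating a smoothed, positive short-term energy contour as if it were the magnitude spectrum of an arbitrary signal and deriving the minimum-phase group delay through the cepstrum, is given below; the subband filtering, silence/fricative handling and multiresolution refinements described above are omitted, and the toy energy contour is purely illustrative.

import numpy as np

def min_phase_group_delay(envelope, eps=1e-8):
    """Treat a positive envelope (e.g., a smoothed short-term energy contour) as the
    magnitude spectrum of an arbitrary signal and return the minimum-phase group delay.
    This is only a sketch of the cepstral route described above."""
    env = np.asarray(envelope, dtype=float)
    mag = np.concatenate([env, env[::-1]])           # even-symmetric "magnitude spectrum"
    c = np.fft.ifft(np.log(mag + eps)).real          # real cepstrum
    n = len(c)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0
    w[n // 2] = 1.0                                  # minimum-phase cepstral window
    log_spec_min = np.fft.fft(c * w)
    phase = np.unwrap(log_spec_min.imag[: len(env)])
    return -np.diff(phase)                           # group delay (arbitrary units)

# Toy "STE" contour with three energy humps standing in for hypothetical syllables
t = np.linspace(0, 3, 300)
ste = (np.exp(-((t - 0.5) ** 2) / 0.02) + np.exp(-((t - 1.5) ** 2) / 0.03)
       + np.exp(-((t - 2.4) ** 2) / 0.02) + 0.01)
gd = min_phase_group_delay(ste)
peaks = np.where((gd[1:-1] > gd[:-2]) & (gd[1:-1] > gd[2:]))[0] + 1
print("local maxima of the group-delay contour (candidate segment anchors):", peaks[:10])

Local extrema of the resulting group-delay contour then serve as candidate syllable nuclei and boundary locations.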
Code of Federal Regulations, 2010 CFR
2010-07-01
... of the Act. (f) Detection limit means the minimum concentration of an analyte (substance) that can be measured and reported with a 99% confidence that the analyte concentration is greater than zero as...
Code of Federal Regulations, 2014 CFR
2014-07-01
... of the Act. (f) Detection limit means the minimum concentration of an analyte (substance) that can be measured and reported with a 99% confidence that the analyte concentration is greater than zero as...
Code of Federal Regulations, 2011 CFR
2011-07-01
... of the Act. (f) Detection limit means the minimum concentration of an analyte (substance) that can be measured and reported with a 99% confidence that the analyte concentration is greater than zero as...
Code of Federal Regulations, 2012 CFR
2012-07-01
... of the Act. (f) Detection limit means the minimum concentration of an analyte (substance) that can be measured and reported with a 99% confidence that the analyte concentration is greater than zero as...
Code of Federal Regulations, 2013 CFR
2013-07-01
... of the Act. (f) Detection limit means the minimum concentration of an analyte (substance) that can be measured and reported with a 99% confidence that the analyte concentration is greater than zero as...
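The definition quoted in these excerpts (the minimum concentration that can be measured and reported with 99% confidence that the analyte concentration is greater than zero) is commonly operationalized from replicate low-level measurements as MDL = t(n-1, 0.99) × s; the sketch below assumes that familiar single-laboratory procedure and uses hypothetical replicate values.

import numpy as np
from scipy import stats

def method_detection_limit(replicates, confidence=0.99):
    """MDL = one-sided Student's t at the given confidence (n-1 degrees of freedom)
    times the standard deviation of replicate low-level measurements."""
    x = np.asarray(replicates, dtype=float)
    s = x.std(ddof=1)
    t = stats.t.ppf(confidence, df=len(x) - 1)
    return t * s

# Hypothetical seven replicate spike measurements (µg/L)
reps = [0.52, 0.48, 0.55, 0.60, 0.45, 0.50, 0.53]
print(f"MDL ~ {method_detection_limit(reps):.3f} µg/L")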
Comparative synthesis and antimicrobial action of silver nanoparticles and silver nitrate
NASA Astrophysics Data System (ADS)
Mosselhy, Dina A.; El-Aziz, Mohamed Abd; Hanna, Magdy; Ahmed, Mohamed A.; Husien, Mona M.; Feng, Qingling
2015-12-01
The rising wave of bacterial resistance to antibiotics has created a need for alternative antibacterial agents, such as silver nanoparticles (Ag NPs). However, the mechanistic antimicrobial actions of Ag NPs and Ag+ ions are still debated. In this regard, our study investigated the comparative antimicrobial action of Ag NPs of two different sizes, 8 nm (nAg1) and 29 nm (nAg2), in comparison with silver nitrate (AgNO3) against five different bacterial species: Aeromonas hydrophila (A. hydrophila), Pseudomonas putida (Ps. putida), Escherichia coli (E. coli), Staphylococcus aureus (S. aureus), and Bacillus subtilis (B. subtilis), using the agar diffusion assay and minimum inhibitory concentration (MIC). The key role of nanoparticle size was confirmed, as the smaller Ag NPs (nAg1) showed more antimicrobial action than the larger particles. Transmission electron microscopy (TEM) studies demonstrated the different mechanistic antibacterial actions of Ag NPs and AgNO3. The effect of combining Ag NPs with antibiotics was also investigated. A synergistic effect of combining Ag NPs with ampicillin was detected against S. aureus, in a size-dependent manner as well. To summarize, our results point towards the major role played by the size of Ag NPs in their antimicrobial effects and the different toxic mechanisms of action induced by Ag NPs and AgNO3.
Sharma, Mukesh Kumar; Narayanan, J; Pardasani, Deepak; Srivastava, Divesh N; Upadhyay, Sanjay; Goel, Ajay Kumar
2016-06-15
Bacillus anthracis, the causative agent of anthrax, is a well-known bioterrorism agent. The determination of surface array protein (Sap), a unique biomarker for B. anthracis, can offer an opportunity for specific detection of B. anthracis in culture broth. In this study, we designed a new catalytic bionanolabel and fabricated a novel electrochemical immunosensor for ultrasensitive detection of B. anthracis Sap antigen. Bimetallic gold-palladium nanoparticles were grown in situ on poly(diallyldimethylammonium chloride)-functionalized boron nitride nanosheets (Au-Pd NPs@BNNSs) and conjugated with mouse anti-B. anthracis Sap antibodies (Ab2), giving a label named Au-Pd NPs@BNNSs/Ab2. The resulting Au-Pd NPs@BNNSs/Ab2 bionanolabel demonstrated high catalytic activity towards the reduction of 4-nitrophenol. The sensitivity of the electrochemical immunosensor, along with redox cycling of 4-aminophenol to 4-quinoneimine, was improved to a great extent. Under optimal conditions, the proposed immunosensor exhibited a wide working range from 5 pg/mL to 100 ng/mL with a minimum detection limit of 1 pg/mL B. anthracis Sap antigen. The practical applicability of the immunosensor was demonstrated by specific detection of Sap secreted by B. anthracis in culture broth after just 1 h of growth. These labels open a new direction for the ultrasensitive detection of different biological warfare agents and their markers in different matrices.
Averós, Xavier; Aparicio, Miguel A; Ferrari, Paolo; Guy, Jonathan H; Hubbard, Carmen; Schmid, Otto; Ilieski, Vlatko; Spoolder, Hans A M
2013-08-14
Information about animal welfare standards and initiatives from eight European countries was collected, grouped, and compared to EU welfare standards to detect those aspects beyond minimum welfare levels demanded by EU welfare legislation. Literature was reviewed to determine the scientific relevance of standards and initiatives, and those aspects going beyond minimum EU standards. Standards and initiatives were assessed to determine their strengths and weaknesses regarding animal welfare. Attitudes of stakeholders in the improvement of animal welfare were determined through a Policy Delphi exercise. Social perception of animal welfare, economic implications of upraising welfare levels, and differences between countries were considered. Literature review revealed that on-farm space allowance, climate control, and environmental enrichment are relevant for all animal categories. Experts' assessment revealed that on-farm prevention of thermal stress, air quality, and races and passageways' design were not sufficiently included. Stakeholders considered that housing conditions are particularly relevant regarding animal welfare, and that animal-based and farm-level indicators are fundamental to monitor the progress of animal welfare. The most notable differences between what society offers and what farm animals are likely to need are related to transportation and space availability, with economic constraints being the most plausible explanation.
A versatile entropic measure of grey level inhomogeneity
NASA Astrophysics Data System (ADS)
Piasecki, Ryszard
2009-06-01
An entropic measure for the analysis of grey level inhomogeneity (GLI) is proposed as a function of length scale. It allows us to quantify the statistical dissimilarity between the actual macrostate and the entropy-maximizing reference one. The maxima (minima) of the measure indicate those scales at which higher (lower) average grey level inhomogeneity appears compared to neighbouring scales. Even a deeply hidden statistical grey level periodicity can be detected by the equally spaced minima of the measure. The striking effect of multiple intersecting curves (MICs) of the measure has been revealed for pairs of simulated patterns which differ in shades of grey or symmetry properties only. In turn, for evolving photosphere granulation patterns, the stability in time of the first peak position has been found. Interestingly, the third peak is dominant at the initial steps of the evolution. This indicates a temporary grouping of granules at a length scale that may belong to the mesogranulation phenomenon. This behaviour has similarities with that reported by Consolini, Berrilli et al. [G. Consolini, F. Berrilli, A. Florio, E. Pietropaolo, L.A. Smaldone, Astron. Astrophys. 402 (2003) 1115; F. Berrilli, D. Del Moro, S. Russo, G. Consolini, Th. Straus, Astrophys. J. 632 (2005) 677] for binarized granulation images of a different data set.
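The sketch below is a simplified stand-in for a scale-dependent grey-level inhomogeneity curve, not the exact entropic measure defined above: at each length scale the image is tiled into cells and the average deficit between the maximum possible Shannon entropy of the grey-level histogram and the per-cell entropy is recorded, so that peaks and valleys versus scale can be inspected. The toy image, the number of grey levels and the tiling rule are all illustrative assumptions.

import numpy as np

def entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def inhomogeneity_curve(img, scales, n_levels=8):
    """For each length scale k, tile the image into k x k cells and average the
    entropy deficit (S_max - S_cell) of the grey-level histogram over the cells.
    A simplified stand-in for the entropic GLI measure, not its definition."""
    img = np.digitize(img, np.linspace(img.min(), img.max(), n_levels + 1)[1:-1])
    s_max = np.log(n_levels)
    curve = []
    for k in scales:
        deficits = []
        for i in range(0, img.shape[0] - k + 1, k):
            for j in range(0, img.shape[1] - k + 1, k):
                cell = img[i:i + k, j:j + k]
                counts = np.bincount(cell.ravel(), minlength=n_levels)
                deficits.append(s_max - entropy(counts))
        curve.append(np.mean(deficits))
    return np.array(curve)

rng = np.random.default_rng(0)
# Toy pattern: smooth gradient plus noise, standing in for a grey-level image
x, y = np.meshgrid(np.arange(128), np.arange(128))
img = np.sin(x / 10.0) + 0.3 * rng.standard_normal((128, 128))
print(inhomogeneity_curve(img, scales=[2, 4, 8, 16, 32]).round(3))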
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
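One of the binarization routes mentioned above can be sketched as follows: estimate a Pearson-correlation connectivity matrix from multichannel signals, convert it to a distance matrix, and keep only the Minimum Spanning Tree edges as the binary backbone (here with networkx on toy data). The EEG preprocessing, the other dependency estimators, and the MCC, uniform-threshold and density-preserving alternatives are not reproduced.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy multichannel "EEG": 16 channels x 1000 samples with some shared structure
n_ch, n_samp = 16, 1000
common = rng.standard_normal(n_samp)
signals = 0.5 * common + rng.standard_normal((n_ch, n_samp))

corr = np.corrcoef(signals)                      # weighted connectivity (Pearson)
dist = 1.0 - np.abs(corr)                        # stronger coupling -> shorter distance
np.fill_diagonal(dist, 0.0)

G = nx.from_numpy_array(dist)                    # complete weighted graph
mst = nx.minimum_spanning_tree(G, weight="weight")

adj = nx.to_numpy_array(mst) > 0                 # binary backbone network (n_ch - 1 edges)
print("edges kept by MST binarization:", mst.number_of_edges())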
Do the Brazilian sardine commercial landings respond to local ocean circulation?
Gherardi, Douglas F. M.; Lentini, Carlos A. D.; Dias, Daniela F.; Campos, Paula C.
2017-01-01
It has been reported that sea surface temperature (SST) anomalies, flow intensity and mesoscale ocean processes all affect sardine production, both in eastern and western boundary current systems. Here we tested the hypothesis that extreme high and low commercial landings of the Brazilian sardine fishery in the South Brazil Bight (SBB) are sensitive to different oceanic conditions. An ocean model (ROMS) and an individual-based model (Ichthyop) were used to assess the relationship between oceanic conditions during the spawning season and commercial landings of the Brazilian sardine one year later. Model output was compared with remote sensing and analysis data, showing good consistency. Simulations indicate that mortality of eggs and larvae by low temperature prior to maximum and minimum landings is significantly higher than mortality caused by offshore advection. However, when periods of maximum and minimum sardine landings are compared with respect to these causes of mortality, no significant differences were detected. Results indicate that mortality caused by prevailing oceanic conditions at early life stages alone cannot be invoked to explain the observed extreme commercial landings of the Brazilian sardine. Likely influencing factors include starvation and predation interacting with the strategy of spawning “at the right place and at the right time”. PMID:28489925
Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar
2015-01-01
In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject-specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As an alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased the 12-lead detection rate by 7% for a reasonable threshold.
Diakaridia, Sanogo; Pan, Yue; Xu, Pengbai; Zhou, Dengwang; Wang, Benzhang; Teng, Lei; Lu, Zhiwei; Ba, Dexin; Dong, Yongkang
2017-07-24
In a distributed Brillouin optical fiber sensor, when the length of the perturbation to be detected is much smaller than the spatial resolution defined by the pulse width, the measured Brillouin gain spectrum (BGS) exhibits two or more peaks. In this work, we propose and demonstrate a technique using differential pulse pair Brillouin optical time-domain analysis (DPP-BOTDA) based on the double-peak BGS to enhance small-scale event detection capability, where two types of single mode fiber (main fiber and secondary fiber) with a 116 MHz Brillouin frequency shift (BFS) difference have been used. We have realized detection of a 5-cm hot spot at the far end of 24 km of single mode fiber by employing a 50-cm spatial resolution DPP-BOTDA with only a 1 GS/s sampling rate (corresponding to 10 cm/point). The BFS at the far end of the 24-km sensing fiber has been measured with a 0.54 MHz standard deviation, which corresponds to a 0.5°C temperature accuracy. This technique is simple and cost effective because it is implemented using a setup similar to that of standard BOTDA; however, it should be noted that consecutive small-scale events have to be separated by a minimum length corresponding to the spatial resolution defined by the pulse width difference.
Ertas, Gokhan
2018-07-01
To assess the value of joint evaluation of diffusion tensor imaging (DTI) measures by using logistic regression modelling to detect high Gleason score (GS) risk group prostate tumors. Fifty tumors imaged using DTI on a 3 T MRI device were analyzed. Regions of interest focusing on the center of tumor foci and on noncancerous tissue on the maps of mean diffusivity (MD) and fractional anisotropy (FA) were used to extract the minimum, the maximum and the mean measures. A measure ratio was computed by dividing the tumor measure by the noncancerous tissue measure. Logistic regression models were fitted for all possible pair combinations of the measures using 5-fold cross validation. Systematic differences are present for all MD measures and also for all FA measures in distinguishing the high risk tumors [GS ≥ 7(4 + 3)] from the low risk tumors [GS ≤ 7(3 + 4)] (P < 0.05). A smaller value for MD measures and a larger value for FA measures indicate high risk. The models enrolling the measures achieve good fits and good classification performances (adjusted R2 = 0.55-0.60, AUC = 0.88-0.91); however, the models using the measure ratios perform better (adjusted R2 = 0.59-0.75, AUC = 0.88-0.95). The model that employs the ratios of minimum MD and maximum FA accomplishes the highest sensitivity, specificity and accuracy (Se = 77.8%, Sp = 96.9% and Acc = 90.0%). Joint evaluation of MD and FA diffusion tensor imaging measures is valuable to detect high GS risk group peripheral zone prostate tumors. However, use of the ratios of the measures improves the accuracy of the detections substantially. Logistic regression modelling provides a favorable solution for the joint evaluations, easily adoptable in clinical practice.
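A scikit-learn sketch of this kind of joint evaluation is shown below: logistic regression on tumor-to-noncancerous ratio features (a minimum-MD ratio and a maximum-FA ratio) scored by 5-fold cross-validated AUC. The feature values are synthetic placeholders generated to mirror the reported direction of the differences, not the study's measurements.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n = 50

# Synthetic ratio features: tumor measure divided by noncancerous-tissue measure.
# High-risk tumors are given lower MD ratios and higher FA ratios (illustrative only).
y = rng.integers(0, 2, n)                                    # 1 = high GS risk
md_ratio = 0.75 - 0.15 * y + 0.08 * rng.standard_normal(n)   # minimum-MD ratio
fa_ratio = 1.20 + 0.40 * y + 0.15 * rng.standard_normal(n)   # maximum-FA ratio
X = np.column_stack([md_ratio, fa_ratio])

model = LogisticRegression()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")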
Robust human detection, tracking, and recognition in crowded urban areas
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, the color features are obtained by taking the difference of the R, G, B spectra and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) color and intensity feature matching for track candidate selection; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing false tracking probability; and 4) forward position prediction based on previous moving speed and direction to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many targets that are tracked. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the second camera, via 'tracking relay', even after the person has moved out of the field of view (FOV) of the first camera. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans, both for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state of the art.
High sensitivity detection of trace gases at atmospheric pressure using tunable diode lasers
NASA Technical Reports Server (NTRS)
Reid, J.; Sinclair, R. L.; Grant, W. B.; Menzies, R. T.
1985-01-01
A detailed study of the detection of trace gases at atmospheric pressure using tunable diode lasers is described. The influence of multipass cells, retroreflectors and topographical targets is examined. The minimum detectable infrared absorption ranges from 0.1 percent for a pathlength of 1.2 km to 0.01 percent over short pathlengths. The factors which limit this sensitivity are discussed, and the techniques are illustrated by monitoring atmospheric CO2 and CH4.
NASA Astrophysics Data System (ADS)
Alakent, Burak; Camurdan, Mehmet C.; Doruker, Pemra
2005-10-01
Time series models, which are constructed from the projections of the molecular-dynamics (MD) runs on principal components (modes), are used to mimic the dynamics of two proteins: tendamistat and the immunity protein of colicin E7 (ImmE7). Four independent MD runs of tendamistat and three independent runs of the ImmE7 protein in vacuum are used to investigate the energy landscapes of these proteins. It is found that mean-square displacements of residues along the modes in different time scales can be mimicked by time series models, which are utilized in dividing protein dynamics into different regimes with respect to the dominating motion type. The first two regimes constitute the dominance of intraminimum motions during the first 5 ps and the random walk motion in a hierarchically higher-level energy minimum, which comprise the initial time period of the trajectories up to 20-40 ps for tendamistat and 80-120 ps for ImmE7. These are also the time ranges within which the linear nonstationary time series are completely satisfactory in explaining protein dynamics. Encountering energy barriers enclosing higher-level energy minima constrains the random walk motion of the proteins, and pseudorelaxation processes at different levels of minima are detected in tendamistat, depending on the sampling window size. Correlation (relaxation) times of 30-40 ps and 150-200 ps are detected for two energy envelopes of successive levels for tendamistat, which gives an overall idea about the hierarchical structure of the energy landscape. However, it should be stressed that correlation times of the modes are highly variable with respect to conformational subspaces and sampling window sizes, indicating the absence of an actual relaxation. The random-walk step sizes and the time length of the second regime are used to illuminate an important difference between the dynamics of the two proteins, which cannot be clarified by the investigation of relaxation times alone: ImmE7 has lower-energy barriers enclosing the higher-level energy minimum, preventing the protein from relaxing and letting it move in a random-walk fashion for a longer period of time.
DETECTION OF KOI-13.01 USING THE PHOTOMETRIC ORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shporer, Avi; Jenkins, Jon M.; Seader, Shawn E.
2011-12-15
We use the KOI-13 transiting star-planet system as a test case for the recently developed BEER algorithm, aimed at identifying non-transiting low-mass companions by detecting the photometric variability induced by the companion along its orbit. Such photometric variability is generated by three mechanisms: the beaming effect, tidal ellipsoidal distortion, and reflection/heating. We use data from three Kepler quarters, from the first year of the mission, while ignoring measurements within the transit and occultation, and show that the planet's ephemeris is clearly detected. We fit for the amplitude of each of the three effects and use the beaming effect amplitude to estimate the planet's minimum mass, which results in M_p sin i = 9.2 ± 1.1 M_J (assuming the host star parameters derived by Szabo et al.). Our results show that non-transiting star-planet systems similar to KOI-13.01 can be detected in Kepler data, including a measurement of the orbital ephemeris and the planet's minimum mass. Moreover, we derive a realistic estimate of the amplitude uncertainties, and use it to show that data obtained during the entire lifetime of the Kepler mission of 3.5 years will allow detecting non-transiting close-in low-mass companions orbiting bright stars, down to the few Jupiter mass level. Data from the Kepler Extended Mission, if funded by NASA, will further improve the detection capabilities.
Esophagram findings in cervical esophageal stenosis: A case-controlled quantitative analysis.
West, Jacob; Kim, Cherine H; Reichert, Zachary; Krishna, Priya; Crawley, Brianna K; Inman, Jared C
2018-01-04
Cervical esophageal stenosis is often diagnosed with a qualitative evaluation of a barium esophagram. Although the esophagram is frequently the initial screening exam for dysphagia, a clear objective standard for stenosis has not been defined. In this study, we measured esophagram diameters in order to establish a quantitative standard for defining cervical esophageal stenosis that requires surgical intervention. Single-institution case-control study. Patients with clinically significant cervical esophageal stenosis, defined by moderate symptoms of dysphagia (Functional Outcome Swallowing Scale > 2 and Functional Oral Intake Scale < 6) persisting for 6 months and responding to dilation treatment, were matched with age, sex, and height controls. Both qualitative and quantitative barium esophagram measurements for the upper, mid-, and lower vertebral bodies of C5 through T1 were analyzed in lateral, oblique, and anterior-posterior views. Stenotic patients versus nonstenotic controls showed no significant differences in age, sex, height, body mass index, or ethnicity. Stenosis was most commonly at the lower border of the sixth cervical vertebra (C6) and the upper border of C7. The mean intraesophageal minimum/maximum ratios of the control and stenotic groups in the lateral view were 0.63 ± 0.08 and 0.36 ± 0.12, respectively (P < 0.0001). Receiver operating characteristic analysis of the minimum/maximum ratios, with a <0.50 ratio delineating stenosis, demonstrated that lateral view measurements had the best diagnostic ability. The sensitivity of the radiologists' qualitative interpretation was 56%. Applying the lateral intraesophageal minimum/maximum ratio improved the sensitivity of the esophagram for detecting clinically significant stenosis to 94%. Applying quantitative determinants in esophagram analysis may improve the sensitivity of detecting cervical esophageal stenosis in dysphagic patients who may benefit from surgical therapy. Level of evidence: IIIb.
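A small sketch of the quantitative criterion described above: compute the intraesophageal minimum/maximum diameter ratio on the lateral view, call ratios below 0.50 stenotic, and report sensitivity, specificity and AUC. The diameter profiles below are invented for illustration only.

import numpy as np
from sklearn.metrics import roc_auc_score

def min_max_ratio(diameters):
    """Ratio of the narrowest to the widest intraesophageal diameter (lateral view)."""
    d = np.asarray(diameters, dtype=float)
    return d.min() / d.max()

# Hypothetical per-patient diameter profiles (mm) across the C5-T1 levels
controls = [[14, 15, 13, 12, 13], [15, 16, 14, 13, 14]]
stenotic = [[14, 15, 6, 12, 13], [13, 14, 5, 11, 12]]

ratios = np.array([min_max_ratio(d) for d in controls + stenotic])
labels = np.array([0] * len(controls) + [1] * len(stenotic))   # 1 = stenosis

pred = ratios < 0.50                                           # study-style cut-off
sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"AUC={roc_auc_score(labels, 1 - ratios):.2f}")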
Turbofan engine demonstration of sensor failure detection
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood
1991-01-01
In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (nonafterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
Kelly, Brian P.; Rydlund, Jr., Paul H.
2006-01-01
Riverbank filtration substantially improves the source-water quality of the Independence, Missouri well field. Coliform bacteria, Cryptosporidium, Giardia, viruses and selected constituents were analyzed in water samples from the Missouri River, two vertical wells, and a collector well. Total coliform bacteria, Cryptosporidium, Giardia, and total culturable viruses were detected in the Missouri River, but were undetected in samples from wells. Using minimum reporting levels for non-detections in well samples, minimum log removals were 4.57 for total coliform bacteria, 1.67 for Cryptosporidium, 1.67 for Giardia, and 1.15 for total culturable virus. Ground-water flow rates between the Missouri River and wells were calculated from water temperature profiles and ranged between 1.2 and 6.7 feet per day. Log removals based on sample pairs separated by the traveltime between the Missouri River and wells were infinite for total coliform bacteria (minimum detection level equal to zero), between 0.8 and 3.5 for turbidity, between 1.5 and 2.1 for Giardia, and between 0.4 and 2.6 for total culturable viruses. Cryptosporidium was detected once in the Missouri River but no corresponding well samples were available. No clear relation was evident between changes in water quality in the Missouri River and in wells for almost all constituents. Results of analyses for organic wastewater compounds and the distribution of dissolved oxygen, specific conductance, and temperature in the Missouri River indicate water quality on the south side of the river was moderately influenced by the south bank inflows to the river upstream from the Independence well field.
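The log-removal figures above follow from the standard definition log10(C_source/C_filtrate); substituting the minimum reporting level for a non-detect in the well sample, as done in the study, yields a conservative minimum log removal. The sketch below uses hypothetical concentrations, not the study's data.

import math

def log_removal(source_conc, well_conc, well_reporting_level=None):
    """log10 reduction from source water to well; if the well sample is a non-detect
    (well_conc is None), the minimum reporting level gives a minimum log removal."""
    effluent = well_conc if well_conc is not None else well_reporting_level
    if effluent is None or effluent <= 0:
        return math.inf                      # non-detect with a zero reporting level
    return math.log10(source_conc / effluent)

# Hypothetical example: river sample 5,000 organisms/100 mL, well sample non-detect
# with a minimum reporting level of 1 organism/100 mL -> minimum log removal of about 3.7
print(f"{log_removal(5000.0, None, well_reporting_level=1.0):.2f}")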
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jianfeng; David Chen, Yongqin; Chen, Xiaohong
2011-12-01
The purpose of this study was to statistically examine changes of surface air temperature in time and space and to analyze two factors potentially influencing air temperature changes in China, i.e., urbanization and net solar radiation. Trends within the temperature series were detected using the Mann-Kendall trend test. The scientific question this study addressed was the role of human activities in the changes of temperature extremes. Other influencing factors, such as net solar radiation, were also discussed. The results of this study indicated that: (1) increasing temperature was observed mainly in northeast and northwest China; (2) different behaviors were identified in the changes of maximum and minimum temperature, respectively. Maximum temperature seemed to be more influenced by urbanization, which could be due to increasing urban albedo, aerosols, and air pollution in urbanized areas. Minimum temperature was subject to the influence of variations in net solar radiation; (3) non-significant increases, and even decreases, in temperature extremes in the Yangtze River basin and the regions south of the Yangtze River basin could be the consequence of higher relative humidity as a result of increasing precipitation; (4) the entire country was dominated by increasing minimum temperature. Thus, the warming of China was reflected mainly in increasing minimum temperature. In addition, consistently increasing temperature was found in the upper reaches of the Yellow River basin and the Yangtze River basin, which has the potential to enhance the melting of permafrost in these areas. This may trigger new ecological problems and raise new challenges for river basin scale water resource management.
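A minimal implementation of the Mann-Kendall trend test (S statistic and Z score under the normal approximation, without the tie correction) applied to a synthetic annual minimum-temperature series is sketched below; it is not the study's data or software.

import math
import numpy as np

def mann_kendall_z(x):
    """Mann-Kendall S statistic and Z score (normal approximation, no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

rng = np.random.default_rng(1)
years = np.arange(1960, 2006)
# Synthetic annual-minimum-temperature anomalies with a warming trend (illustrative only)
tmin = 0.03 * (years - years[0]) + rng.standard_normal(len(years)) * 0.5
s, z = mann_kendall_z(tmin)
print(f"S = {s:.0f}, Z = {z:.2f} (|Z| > 1.96 indicates a significant trend at the 5% level)")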
Occupational Disease Registries-Characteristics and Experiences.
Davoodi, Somayeh; Haghighi, Khosro Sadeghniat; Kalhori, Sharareh Rostam Niakan; Hosseini, Narges Shams; Mohammadzadeh, Zeinab; Safdari, Reza
2017-06-01
Due to the growth of occupational diseases and increased public awareness of their consequences, attention to various aspects of these diseases and to improving occupational health and safety has become very important. Therefore, there is a need for appropriate information management tools, such as registries, in order to recognize disease patterns and then make decisions about prevention, early detection and treatment. These registries have different characteristics in various countries according to their occupational health priorities. The aim of this study is to evaluate the dimensions of occupational disease registries, including objectives, data sources, responsible institutions, minimum data sets, classification systems and registration processes in different countries. In this study, papers were searched using MEDLINE (PubMed), Google Scholar, Scopus, ProQuest and Google. The search was based on English keywords for all search engines, including "occupational disease", "work related disease", "surveillance", "reporting", "registration system" and "registry", combined with the names of the countries, including all subheadings. After categorizing the search findings in tables, the results were compared with each other. Important aspects of the registries were studied in ten countries: Finland, France, United Kingdom, Australia, Czech Republic, Malaysia, United States, Singapore, Russia and Turkey. The results show that the surveyed countries have statistical, treatment and prevention objectives. Data sources in most of the registries were physicians and employers. The minimum data sets in most of them consist of information about the patient, disease, occupation and employer. Some countries have their own occupation-related classification systems and some apply international classification systems such as ICD-10. Finally, the registration process differed between countries. Because occupational diseases are often preventable, but not curable, it is necessary for all countries to consider prevention and early detection of occupational diseases among the objectives of their registry systems. It is also recommended that all countries reach an agreement on the global characteristics of occupational disease registries. This would enable countries to compare their data at the international level.
NASA Technical Reports Server (NTRS)
Cecil, Daniel J.; Goodman, Steven J.; Boccippio, Dennis J.; Zipser, Edward J.; Nesbitt, Stephen W.
2004-01-01
During its first three years, the Tropical Rainfall Measuring Mission (TRMM) satellite observed nearly six million precipitation features. The population of precipitation features is sorted by lightning flash rate, minimum brightness temperature, maximum radar reflectivity, areal extent, and volumetric rainfall. For each of these characteristics, essentially describing the convective intensity or the size of the features, the population is broken into categories consisting of the top 0.001%, top 0.01%, top 0.1%, top 1%, top 2.4%, and remaining 97.6%. The set of 'weakest/smallest' features comprises 97.6% of the population because that fraction does not have detected lightning, with a minimum detectable flash rate of 0.7 fl/min. The greatest observed flash rate is 1351 fl/min; the lowest brightness temperatures are 42 K (85 GHz) and 69 K (37 GHz). The largest precipitation feature covers 335,000 sq km and the greatest rainfall from an individual precipitation feature exceeds 2 × 10^12 kg of water. There is considerable overlap between the greatest storms according to different measures of convective intensity. The largest storms are mostly independent of the most intense storms. The set of storms producing the most rainfall is a convolution of the largest and the most intense storms. This analysis is a composite of the global tropics and subtropics. Significant variability is known to exist between locations, seasons, and meteorological regimes. Such variability will be examined in Part II. In Part I, only a crude land/ocean separation is made. The known differences in bulk lightning flash rates over land and ocean result from at least two differences in the precipitation feature population: the frequency of occurrence of intense storms, and the magnitude of those intense storms that do occur. Even when restricted to storms with the same brightness temperature, same size, or same radar reflectivity aloft, the storms over water are considerably less likely to produce lightning than are comparable storms over land.
Sohrabi, M; Hakimi, A; Soltani, Z
2016-12-01
A recent development of 50-Hz-HV ECE of 1-mm-thick and 250-µm-thick polycarbonate track detectors (PCTDs) has shown promising results for some health physics, dosimetry and ion-beam-related applications. While the method has good characteristics for some applications, it produces a relatively high background track density (BGTD), in particular when very high voltages are applied to the PCTDs. In order to decrease the minimum detection limit (MDL) of the PCTDs and to further promote low-dose applications, the BGTD was reduced by a layer removal methodology using ethylenediamine (EDA). The effects on BGTD reduction of EDA concentration in water (50, 60, 65, 70, 75, 80, 85 and 90%) at room temperature (26°C) and of soaking durations up to 100 min at each EDA concentration were studied. The thickness of the layer removed from the surface of a PCTD depends strongly on the soaking time and EDA concentration; it increases with EDA concentration, reaching, for example, 700 µm after 2 h of soaking in the EDA solution. After about 10 min of soaking at any of the above-stated concentrations, the BGTD reaches its minimum value, a value which differs from concentration to concentration. An EDA concentration of 85% in water provided the lowest BGTD of 64.06 ± 3.12 tracks cm^-2, about six times lower than the original value. It is shown that the layer removal process does not significantly change the registration characteristics of the PCTD or its appearance. The MDL of the PCTDs depends strongly on the BGTD. The MDL values for a desired confidence level were also studied by three calculation methods. The results of the BGTD and MDL studies under the different conditions applied are presented and discussed.
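The dependence of the MDL on the background track density noted above can be illustrated with a classical Currie-style counting-statistics estimate, one possible calculation method and not necessarily one of the three used in the paper: for a background of B counts (paired-blank form), the critical level is about 2.33·sqrt(B) and the detection limit about 2.71 + 4.65·sqrt(B).

import math

def currie_limits(background_counts):
    """Classical Currie-style critical level and detection limit (paired-blank form),
    in counts, for a background of B counts."""
    b = float(background_counts)
    l_c = 2.33 * math.sqrt(b)
    l_d = 2.71 + 4.65 * math.sqrt(b)
    return l_c, l_d

# Hypothetical background counts before and after layer removal, counted over the
# same scanned area (counts, not tracks/cm^2, so that counting statistics apply)
for b in (400.0, 64.0):
    l_c, l_d = currie_limits(b)
    print(f"B = {b:5.0f} counts -> L_C = {l_c:5.1f}, L_D = {l_d:5.1f} counts")

Lowering the background counts (here, by layer removal) directly lowers both limits, which is why reducing the BGTD improves the MDL.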
NASA Technical Reports Server (NTRS)
Cecil, Daniel J.; Goodman, Steven J.; Boccippio, Dennis J.; Zipser, Edward J.; Nesbitt, Stephen W.
2005-01-01
During its first three years, the Tropical Rainfall Measuring Mission (TRMM) satellite observed nearly six million precipitation features. The population of precipitation features is sorted by lightning flash rate, minimum brightness temperature, maximum radar reflectivity, areal extent, and volumetric rainfall. For each of these characteristics, essentially describing the convective intensity or the size of the features, the population is broken into categories consisting of the top 0.001%, top 0.01%, top 0.1%, top 1%, top 2.4%, and remaining 97.6%. The set of weakest/smallest features composes 97.6% of the population because that fraction does not have detected lightning, with a minimum detectable flash rate of 0.7 flashes (fl) per minute. The greatest observed flash rate is 1351 fl per minute; the lowest brightness temperatures are 42 K (85 GHz) and 69 K (37 GHz). The largest precipitation feature covers 335 000 square kilometers and the greatest rainfall from an individual precipitation feature exceeds 2 × 10^12 kg of water per hour. There is considerable overlap between the greatest storms according to different measures of convective intensity. The largest storms are mostly independent of the most intense storms. The set of storms producing the most rainfall is a convolution of the largest and the most intense storms. This analysis is a composite of the global Tropics and subtropics. Significant variability is known to exist between locations, seasons, and meteorological regimes. Such variability will be examined in Part II. In Part I, only a crude land-ocean separation is made. The known differences in bulk lightning flash rates over land and ocean result from at least two differences in the precipitation feature population: the frequency of occurrence of intense storms and the magnitude of those intense storms that do occur. Even when restricted to storms with the same brightness temperature, same size, or same radar reflectivity aloft, the storms over water are considerably less likely to produce lightning than are comparable storms over land.
Sensitive detection of formaldehyde using an interband cascade laser near 3.6 μm
Ren, Wei; Luo, Longqiang; Tittel, Frank K.
2015-12-31
Here, we report the development of a formaldehyde (H2CO) trace gas sensor using a continuous wave (CW), thermoelectrically cooled (TEC), distributed-feedback interband cascade laser (DFB-ICL) at 3.6 μm. Wavelength modulation spectroscopy was used to detect the second harmonic spectra of a strong H2CO absorption feature centered at 2778.5 cm^-1 (3599 nm) in its ν1 fundamental vibrational band. A compact and novel multipass cell (7.6-cm physical length and 32-ml sampling volume) was implemented to achieve an effective optical path length of 3.75 m. A minimum detection limit of 6 parts per billion (ppb) at an optimum gas pressure of 200 Torr was achieved with a 1-s data acquisition time. An Allan-Werle deviation analysis was performed to investigate the long-term stability of the sensor system, and a 1.5 ppb minimum detectable concentration could be achieved by averaging up to 140 s. Absorption interference effects from atmospheric H2O (2%) and CH4 (5 ppm) were also analyzed in this work and proved to be insignificant for the current sensor configuration.
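A minimal sketch of the Allan-deviation analysis mentioned above is given below: compute the (non-overlapping) Allan deviation of a 1-s concentration time series as a function of averaging time to see where further averaging stops improving the detection limit. The synthetic white-noise-plus-drift series and its parameters are illustrative only, not the sensor's actual data.

import numpy as np

def allan_deviation(x, dt, m_values):
    """Non-overlapping Allan deviation of a time series x sampled every dt seconds,
    for averaging factors m (tau = m * dt)."""
    x = np.asarray(x, dtype=float)
    taus, adev = [], []
    for m in m_values:
        k = len(x) // m
        if k < 2:
            break
        means = x[: k * m].reshape(k, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        taus.append(m * dt)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)

rng = np.random.default_rng(0)
# Synthetic 1-s H2CO readings (ppb): white noise plus a slow drift, illustrative only
t = np.arange(4000.0)
series = 6.0 * rng.standard_normal(t.size) + 0.002 * t

taus, adev = allan_deviation(series, dt=1.0, m_values=[1, 2, 5, 10, 20, 50, 100, 200, 500])
for tau, a in zip(taus, adev):
    print(f"tau = {tau:6.0f} s  Allan deviation ~ {a:.2f} ppb")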
Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R
2011-01-01
The objective of this study is to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) isolates heteroresistant to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC = 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Detection of heteroresistance to colistin was determined by plating efficiency of all the isolates with the same initial MICs of 2, 1, and 0.5 µg/ml, which resulted in 14 of them showing a greater than 8-fold increase in the MIC in some cases. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.
Li, Chuanliang; Wu, Yingfa; Qiu, Xuanbing; Wei, Jilin; Deng, Lunhua
2017-05-01
Wavelength modulation spectroscopy (WMS) combined with a multipass absorption cell has been used to measure a weak absorption line of carbon monoxide (CO) at 1.578 µm. A 0.95 m Herriott-type cell provides an effective absorption path length of 55.1 m. The WMS signals from the first and second harmonic outputs of a lock-in amplifier (WMS-1f and WMS-2f, respectively) agree with the Beer-Lambert law, especially at low concentrations. After boxcar averaging, the minimum detection limit achieved is 4.3 ppm for a measurement time of 0.125 s. The corresponding normalized detection limit is 84 ppm m Hz^-1/2. If the integration time is increased to 88 s, the minimum detectable limit of CO can reach 0.29 ppm based on an Allan variance analysis. The pressure-dependent relationship is validated after accounting for the pressure factor in data processing. Finally, a linear correlation between the WMS-2f amplitudes and gas concentrations is obtained at concentration ratios less than 15.5%, and the accuracy is better than 92% at total pressures less than 62.7 Torr.
Optimal sensor placement for active guided wave interrogation of complex metallic components
NASA Astrophysics Data System (ADS)
Coelho, Clyde K.; Kim, Seung Bum; Chattopadhyay, Aditi
2011-04-01
With research in structural health monitoring (SHM) moving towards increasingly complex structures for damage interrogation, the placement of sensors is becoming a key issue in the performance of the damage detection methodologies. For ultrasonic wave based approaches, this is especially important because of the sensitivity of the travelling Lamb waves to material properties, geometry and boundary conditions that may obscure the presence of damage if they are not taken into account during sensor placement. The framework proposed in this paper defines a sensing region for a pair of piezoelectric transducers in a pitch-catch damage detection approach by taking into account the material attenuation and probability of false alarm. Using information about the region interrogated by a sensor-actuator pair, a simulated annealing optimization framework was implemented in order to place sensors on complex metallic geometries such that a selected minimum damage type and size could be detected with an acceptable probability of false alarm anywhere on the structure. This approach was demonstrated on a lug joint to detect a crack and on a large Naval SHM test bed, and resulted in a placement of sensors that was able to interrogate all parts of the structure using the minimum number of transducers.
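A toy simulated-annealing sketch of the placement idea is given below: choose a fixed number of transducer positions from a candidate grid so that the fraction of structural points covered by at least one pitch-catch pair (with an assumed attenuation-limited range) is maximized. The geometry, the range value, the coverage rule and the cooling schedule are all illustrative assumptions, not the paper's wave-propagation or probability-of-false-alarm modelling.

import numpy as np

rng = np.random.default_rng(0)

# Structural points to interrogate and candidate transducer sites (toy plate geometry)
points = rng.uniform(0, 1, size=(400, 2))
candidates = np.array([(x, y) for x in np.linspace(0, 1, 8) for y in np.linspace(0, 1, 8)])
RANGE = 0.35        # attenuation-limited sensing radius (assumed)
K = 6               # number of transducers to place

def coverage(idx):
    """Fraction of points within range of at least two transducers (a pitch-catch pair)."""
    d = np.linalg.norm(points[:, None, :] - candidates[idx][None, :, :], axis=2)
    return np.mean((d <= RANGE).sum(axis=1) >= 2)

current = list(rng.choice(len(candidates), K, replace=False))
best, best_cov = list(current), coverage(current)
T = 1.0
for step in range(2000):
    T *= 0.998                                                # geometric cooling schedule
    trial = list(current)
    trial[rng.integers(K)] = rng.integers(len(candidates))    # move one transducer
    if len(set(trial)) < K:
        continue
    dc = coverage(trial) - coverage(current)
    if dc > 0 or rng.random() < np.exp(dc / max(T, 1e-6)):    # accept uphill or, sometimes, downhill
        current = trial
        if coverage(current) > best_cov:
            best, best_cov = list(current), coverage(current)
print(f"best coverage: {best_cov:.1%} with transducers at candidate indices {sorted(best)}")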
Brown, J. E.; Brown, P. K.; Pinto, L. H.
1977-01-01
1. The metallochromic indicator dye, arsenazo III, was injected intracellularly into Limulus ventral photoreceptor cells to concentrations greater than 1 mM. 2. The absorption spectrum (450-750 nm) of the dye in single dark-adapted cells was measured by a scanning microspectrophotometer. When a cell was light-adapted, the absorption of the dye changed; the difference spectrum had two maxima at about 610 and 660 nm, a broad minimum at about 540 nm and an isosbestic point at about 585 nm. 3. When the intracellular calcium concentration was raised in dark-adapted cells previously injected with arsenazo III, the difference spectrum had two maxima at about 610 and 660 nm, a broad minimum at about 530 nm and an isosbestic point at about 585 nm. The injection of Mg2+ into dark-adapted cells previously injected with the dye induced a difference spectrum that had a single maximum at about 620 nm. Also, decreasing the intracellular pH of cells previously injected with the dye induced a difference spectrum that had a minimum at about 620 nm. The evidence suggests that there is a rise of intracellular ionized calcium when a Limulus ventral photoreceptor is light-adapted. 4. The intracellular calcium concentration, [Ca2+]i, in light-adapted photoreceptors was estimated to reach at least 10^-4 M by comparing the light-induced difference spectra measured in ventral photoreceptors with a standard curve determined in microcuvettes containing 2 mM arsenazo III in 400 mM-KCl, 1 mM-MgCl2 and 25 mM MOPS at pH 7.0. 5. In cells injected to less than 3 mM arsenazo III, light induced a transient decrease in optical transmission at 660 nm (T660). This decrease in T660 indicates that illumination of a ventral photoreceptor normally causes a transient increase of [Ca2+]i. 6. Arsenazo III was found to be sensitive, selective and rapid enough to measure light-induced changes of intracellular ionized calcium in Limulus ventral photoreceptor cells.
Cavity-Enhanced Absorption Spectroscopy and Photoacoustic Spectroscopy for Human Breath Analysis
NASA Astrophysics Data System (ADS)
Wojtas, J.; Tittel, F. K.; Stacewicz, T.; Bielecki, Z.; Lewicki, R.; Mikolajczyk, J.; Nowakowski, M.; Szabra, D.; Stefanski, P.; Tarka, J.
2014-12-01
This paper describes two different optoelectronic detection techniques: cavity-enhanced absorption spectroscopy and photoacoustic spectroscopy. These techniques are designed to perform a sensitive analysis of trace gas species in exhaled human breath for medical applications. With such systems, the detection of pathogenic changes at the molecular level can be achieved. The presence of certain gases (biomarkers) at increased concentration levels indicates numerous human diseases. Diagnosis of a disease in its early stage would significantly increase the chances for effective therapy. Non-invasive, real-time measurement, high sensitivity and selectivity, and minimal discomfort for patients are the main advantages of human breath analysis. At present, monitoring of volatile biomarkers in breath is commonly used for diagnostic screening, treatment of specific conditions, therapy monitoring, and control of exogenous gases (such as bacterial and poisonous emissions), as well as for the analysis of metabolic gases.
UAS in the NAS Flight Test Series 4 Overview
NASA Technical Reports Server (NTRS)
Murphy, Jim
2016-01-01
Flight Test Series 4 (FT4) provides the researchers with an opportunity to expand on the data collected during the first flight tests. Following Flight Test Series 3, additional scripted encounters with different aircraft performance and sensors will be conducted. FT4 is presently planned for Spring of 2016 to ensure collection of data to support the validation of the final RTCA Phase 1 DAA (Detect and Avoid) Minimum Operational Performance Standards (MOPS). There are three research objectives associated with this goal: (1) evaluate the performance of the DAA system against cooperative and non-cooperative aircraft encounters; (2) evaluate UAS (Unmanned Aircraft Systems) pilot performance in response to DAA maneuver guidance and alerting with live intruder encounters; and (3) evaluate TCAS/DAA (Traffic Alert and Collision Avoidance System/Detect and Avoid) interoperability. This flight test series will focus on only the Scripted Encounters configuration, supporting the collection of data to validate the interoperability of DAA and collision avoidance algorithms.
Volatile Organic Compound Optical Fiber Sensors: A Review
Elosua, Cesar; Matias, Ignacio R.; Bariain, Candido; Arregui, Francisco J.
2006-01-01
Volatile organic compound (VOC) detection is a topic of growing interest with applications in diverse fields, ranging from environmental uses to the food and chemical industries. Optical fiber VOC sensors, which offer new and interesting properties that overcome some of the drawbacks of traditional gas sensors, appeared over two decades ago. Thanks to their minimally invasive nature and the advantages that optical fiber offers, such as light weight, passive operation, low attenuation and the possibility of multiplexing, among others, these sensors are a real alternative to electronic ones in electrically noisy environments where electronic sensors cannot operate correctly. In the present work, a classification of these devices is made according to the sensing mechanism, also taking into account the sensing materials and the different methods of fabrication. In addition, some solutions already implemented for the detection of VOCs using optical fiber sensors are described in detail.
Effect of occlusal appliances and clenching on the internally deranged TMJ space.
Kuboki, T; Takenami, Y; Orsini, M G; Maekawa, K; Yamashita, A; Azuma, Y; Clark, G T
1999-01-01
Stabilization appliances and mandibular anterior repositioning appliances have been used to treat patients with internal derangement of the temporomandibular joint (TMJ) based on the assumption that these appliances work by decompressing the TMJ. The purpose of this study was to indirectly test this assumption. Bilateral TMJ tomograms of 7 subjects with unilateral anterior disc displacement without reduction (ADDwor) were taken during comfortable closure and during maximum clenching in maximum intercuspation; tomograms were also taken with the 2 types of occlusal appliances in use. Outlines of the condyle and the temporal fossa were automatically determined by an edge-detection protocol, and the minimum joint space dimension of the joints with and without ADDwor was automatically measured for each experimental condition as the outcome variable. Upon comfortable closure and maximum clenching, the minimum joint space dimensions of the ipsilateral and contralateral joints with the use of stabilization appliances and mandibular anterior repositioning appliances were not significantly different from those seen in maximum intercuspation. These findings do not indicate that these appliances induce an increase in joint space during closing and clenching in joints with ADDwor.
The perception of complex tones by a false killer whale (Pseudorca crassidens).
Yuen, Michelle M L; Nachtigall, Paul E; Breese, Marlee; Vlachos, Stephanie A
2007-03-01
Complex tonal whistles are frequently produced by some odontocete species. However, no experimental evidence exists regarding the detection of complex tones or the discrimination of harmonic frequencies by a marine mammal. The objectives of this investigation were to examine the ability of a false killer whale to discriminate pure tones from complex tones and to determine the minimum intensity level of a harmonic tone required for the whale to make the discrimination. The study was conducted with a go/no-go modified staircase procedure. The different stimuli were complex tones with a fundamental frequency of 5 kHz with one to five harmonic frequencies. The results from this complex tone discrimination task demonstrated: (1) that the false killer whale was able to discriminate a 5 kHz pure tone from a complex tone with up to five harmonics, and (2) that discrimination thresholds or minimum intensity levels exist for each harmonic combination measured. These results indicate that both frequency level and harmonic content may have contributed to the false killer whale's discrimination of complex tones.
Quadroni, Silvia; Crosa, Giuseppe; Gentili, Gaetano; Espa, Paolo
2017-12-31
The present work focuses on evaluating the ecological effects of hydropower-induced streamflow alteration within four catchments in the central Italian Alps. Downstream from the water diversions, minimum flows are released as an environmental protection measure, ranging approximately from 5 to 10% of the mean annual natural flow estimated at the intake section. Benthic macroinvertebrates as well as daily averaged streamflow were monitored for five years at twenty regulated stream reaches, and possible relationships between benthos-based stream quality metrics and environmental variables were investigated. Despite the non-negligible inter-site differences in basic streamflow metrics, benthic macroinvertebrate communities were generally dominated by a few highly resilient taxa. The highest level of diversity was detected at sites where the upstream minimum flow exceedance is higher and further anthropogenic pressures (other than hydropower) are lower. However, according to the current Italian normative index, the ecological quality was good/high on average at all of the investigated reaches, thus complying with the Water Framework Directive standards. Copyright © 2017 Elsevier B.V. All rights reserved.
Determination of vigabatrin in plasma by reversed-phase high-performance liquid chromatography.
Tsanaclis, L M; Wicks, J; Williams, J; Richens, A
1991-05-01
A method is described for the determination of vigabatrin in 50 microliters of plasma by isocratic high-performance liquid chromatography using fluorescence detection. The procedure involves protein precipitation with methanol followed by precolumn derivatisation with o-phthaldialdehyde reagent. Separation of the derivatised vigabatrin was achieved on a Microsorb C18 column using a mobile phase of 10 mM orthophosphoric acid:acetonitrile:methanol (6:3:1) at a flow rate of 2.0 ml/min. Assay time is 15 min and chromatograms show no interference from commonly coadministered anticonvulsant drugs. The total analytical error within the range of 0.85-85 micrograms/ml was found to be 7.6%, with a within-replicates error of 2.76%. The minimum detection limit was 0.08 micrograms/ml and the minimum quantitation limit was 0.54 micrograms/ml.
NASA Technical Reports Server (NTRS)
Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Sturdy, James L.; Vincent, Michael J.; Hoffler, Keith D.; Myer, Robert R.; DeHaven, Anna M.
2017-01-01
As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The technique, results, and lessons learned from a detailed End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS), based on specific test vectors and encounter cases, will be presented in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, Ryan A.; McGuire, Brett A.; Remijan, Anthony J.
Recently, Lattelais et al. have interpreted aggregated observations of molecular isomers to suggest that there exists a "minimum energy principle", such that molecular formation will favor more stable molecular isomers for thermodynamic reasons. To test the predictive power of this principle, we have fully characterized the spectra of the three isomers of C3H2O toward the well-known molecular region Sgr B2(N). Evidence for the detection of the isomers cyclopropenone (c-C3H2O) and propynal (HCCCHO) is presented, along with evidence for the non-detection of the lowest zero-point energy isomer, propadienone (CH2CCO). We interpret these observations as evidence that chemical formation pathways, which may be under kinetic control, have a more pronounced effect on final isomer abundances than thermodynamic effects such as the minimum energy principle.
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin; Yusop, Zulkifli
2017-06-01
Most trend analyses that have been conducted have not considered the existence of a change point in the time series. If a change point occurs, the trend analysis will not be able to detect an obvious increasing or decreasing trend over certain parts of the series. Furthermore, the lack of discussion on the possible factors that influenced either the decreasing or the increasing trend in the series needs to be addressed in any trend analysis. Hence, this study investigates the trends and change points of the mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia, and determines the possible factors that could contribute to the significant trends. In this study, the Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of the abrupt changes in the temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These change points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points in the series are related to the significant trends of the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. It was observed that the magnitudes of the increasing trends in minimum temperatures were larger than those in maximum temperatures for most of the studied stations, particularly the urban stations. These increases are suspected to be linked to the urban heat island effect in addition to the El Niño event.
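For readers unfamiliar with the Pettitt test used here, the following minimal Python sketch computes the change-point statistic and its approximate p-value for a single series; the synthetic temperature data are purely illustrative and are not drawn from the Malaysian stations.

```python
import numpy as np

def pettitt_test(x):
    """Pettitt change-point test for a single abrupt shift in the median.

    Returns the index of the most probable change point and an approximate
    two-sided p-value (the usual large-sample approximation).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum_{i<=t} sum_{j>t} sign(x_j - x_i)
    sgn = np.sign(x[None, :] - x[:, None])        # sgn[i, j] = sign(x_j - x_i)
    u = np.array([sgn[: t + 1, t + 1 :].sum() for t in range(n - 1)])
    k = np.abs(u).max()
    t_change = int(np.abs(u).argmax())
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))
    return t_change, min(p, 1.0)

# Example: a series whose mean shifts upward partway through
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(26.0, 0.3, 20), rng.normal(26.8, 0.3, 12)])
print(pettitt_test(series))   # change point near index 19, small p-value
```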
Trend analysis of long-term temperature time series in the Greater Toronto Area (GTA)
NASA Astrophysics Data System (ADS)
Mohsin, Tanzina; Gough, William A.
2010-08-01
As the majority of the world's population is living in urban environments, there is growing interest in studying local urban climates. In this paper, for the first time, the long-term trends (31-162 years) of temperature change have been analyzed for the Greater Toronto Area (GTA). Annual and seasonal time series for a number of urban, suburban, and rural weather stations are considered. Non-parametric statistical techniques such as the Mann-Kendall test and Theil-Sen slope estimation are used primarily for assessing the significance and detecting trends, and the sequential Mann test is used to detect any abrupt climate change. Statistically significant trends for annual mean and minimum temperatures are detected for almost all stations in the GTA. Winter is found to be the most coherent season, contributing substantially to the increase in annual minimum temperature. The analyses of the abrupt changes in temperature suggest that the beginning of the increasing trend in Toronto started after the 1920s and then continued to increase to the 1960s. For all stations, there is a significant increase of annual and seasonal (particularly winter) temperatures after the 1980s. In terms of the linkage between urbanization and spatiotemporal thermal patterns, significant linear trends in annual mean and minimum temperature are detected for the period of 1878-1978 for the urban station, Toronto, while for the rural counterparts the trends are not significant. Also, for all stations in the GTA that are situated in all directions except south of Toronto, substantial temperature change is detected for the periods of 1970-2000 and 1989-2000. It is concluded that urbanization in the GTA has significantly contributed to the increase of the annual mean temperatures during the past three decades. In addition to urbanization, the influence of local climate, topography, and larger-scale warming is incorporated in the analysis of the trends.
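The Mann-Kendall statistic and Theil-Sen slope used in this analysis are straightforward to compute. The sketch below is a minimal illustration on synthetic annual temperatures (station data are not reproduced here); the tie and autocorrelation corrections used in practice are omitted.

```python
import numpy as np
from itertools import combinations

def theil_sen_slope(t, y):
    """Theil-Sen slope: the median of slopes over all pairs of observations."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(y)), 2) if t[j] != t[i]]
    return float(np.median(slopes))

def mann_kendall_s(y):
    """Mann-Kendall S statistic; S > 0 suggests an increasing trend."""
    y = np.asarray(y, dtype=float)
    return int(np.sign(y[None, :] - y[:, None])[np.triu_indices(len(y), k=1)].sum())

# Synthetic annual mean temperatures with a weak warming trend
years = np.arange(1878, 1878 + 30)
temps = 8.0 + 0.02 * (years - years[0]) + np.random.default_rng(2).normal(0, 0.3, 30)
print(theil_sen_slope(years, temps), mann_kendall_s(temps))
```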
Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling
NASA Astrophysics Data System (ADS)
Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
NASA Astrophysics Data System (ADS)
Zheng, Chuan-Tao; Huang, Jian-Qiang; Ye, Wei-Lin; Lv, Mo; Dang, Jing-Min; Cao, Tian-Shu; Chen, Chen; Wang, Yi-Ding
2013-11-01
A portable near-infrared (NIR) CH4 detection sensor based on a distributed feedback (DFB) laser modulated at 1.654 μm is experimentally demonstrated. An intelligent temperature controller with an accuracy of -0.07 to +0.09 °C, together with a scan and modulation module generating saw-wave and cosine-wave signals, is developed to drive the DFB laser, and a cost-effective lock-in amplifier used to extract the second harmonic signal is integrated. Thorough experiments are carried out to obtain the detection performance, including detection range, accuracy, stability and the minimum detection limit (MDL). Measurement results show that the absolute detection error relative to the standard value is less than 7% within the range of 0-100%, and the MDL is estimated to be about 11 ppm under an absorption length of 0.2 m and a noise level of 2 mVpp. Twenty-four-hour monitoring of two gas samples (0.1% and 20%) indicates that the absolute errors are less than 7% and 2.5%, respectively, suggesting good long-term stability. The sensor reveals competitive characteristics compared with other reported portable or handheld sensors. The developed sensor can also be used for the detection of other gases by adopting other DFB lasers with different center wavelengths, using the same hardware and slightly modified software.
Ponce, Ninez; Shimkhada, Riti; Raub, Amy; Daoud, Adel; Nandi, Arijit; Richter, Linda; Heymann, Jody
2017-08-02
There is recognition that social protection policies such as raising the minimum wage can favourably impact health, but little evidence links minimum wage increases to child health outcomes. We used multi-year data (2003-2012) on national minimum wages linked to individual-level data from the Demographic and Health Surveys (DHS) from 23 low- and middle-income countries (LMICs) that had at least two DHS surveys to establish pre- and post-observation periods. Over a pre- and post-interval ranging from 4 to 8 years, we examined minimum wage growth and four nutritional status outcomes among children under 5 years: stunting, wasting, underweight, and anthropometric failure. Using a differences-in-differences framework with country and time fixed effects, a 10% increase in minimum wage growth over time was associated with a 0.5 percentage point decline in stunting (-0.054, 95% CI (-0.084, -0.025)) and a 0.3 percentage point decline in anthropometric failure (-0.031, 95% CI (-0.057, -0.005)). We did not observe statistically significant associations between minimum wage growth and underweight or wasting. We found similar results for the poorest households working in non-agricultural and non-professional jobs, where minimum wage growth may have the most leverage. Modest increases in the minimum wage over a 4- to 8-year period might be effective in reducing child undernutrition in LMICs.
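The differences-in-differences design with country and time fixed effects can be illustrated with a small regression sketch. The example below uses a fabricated panel and a linear probability model via statsmodels; the variable names (`exposure`, `stunted`) and all numbers are assumptions for illustration only and do not reproduce the DHS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical child-level panel: each row is one child in a country-period survey.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "country": np.repeat([f"c{i}" for i in range(23)], 200),
    "period": np.tile(np.repeat(["pre", "post"], 100), 23),
    "wage_growth": np.repeat(rng.uniform(0, 0.6, 23), 200),  # cumulative minimum-wage growth
})
# Treatment intensity: wage growth "applies" only in the post period
df["exposure"] = np.where(df["period"] == "post", df["wage_growth"], 0.0)
df["stunted"] = (rng.random(len(df)) < 0.35 - 0.05 * df["exposure"]).astype(int)

# Linear probability model with country and time fixed effects,
# clustering standard errors by country
model = smf.ols("stunted ~ exposure + C(country) + C(period)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.params["exposure"], model.bse["exposure"])
```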
Identifying hidden voice and video streams
NASA Astrophysics Data System (ADS)
Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin
2009-04-01
Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
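The power spectral density fingerprinting step can be sketched as follows: packet arrival times are binned into a rate signal, and its normalized Welch PSD serves as the flow fingerprint. This is only a schematic rendering of the idea, not the VVS-I implementation; the bin width, segment length, and distance metric are assumptions.

```python
import numpy as np
from scipy.signal import welch

def psd_fingerprint(pkt_times, fs=100.0, duration=60.0, nperseg=256):
    """Build a normalized PSD fingerprint from packet arrival timestamps."""
    edges = np.arange(0.0, duration + 1.0 / fs, 1.0 / fs)
    rate, _ = np.histogram(pkt_times, bins=edges)     # packets-per-bin rate signal
    f, pxx = welch(rate.astype(float), fs=fs, nperseg=nperseg)
    return f, pxx / pxx.sum()                          # normalize for comparison

def fingerprint_distance(p, q):
    """Simple L2 distance between two normalized fingerprints."""
    return float(np.linalg.norm(p - q))

# Example: a nearly periodic voice-like stream vs. an unstructured bulk-transfer stream
rng = np.random.default_rng(4)
voice = np.cumsum(rng.normal(0.02, 0.002, 3000))       # ~50 pkt/s, near-periodic
bulk = np.sort(rng.uniform(0, 60, 3000))               # unstructured timing
_, fp_v = psd_fingerprint(voice)
_, fp_b = psd_fingerprint(bulk)
print(fingerprint_distance(fp_v, fp_b))
```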
Natural gas pipeline leak detector based on NIR diode laser absorption spectroscopy.
Gao, Xiaoming; Fan, Hong; Huang, Teng; Wang, Xia; Bao, Jian; Li, Xiaoyun; Huang, Wei; Zhang, Weijun
2006-09-01
The paper reports on the development of an integrated natural gas pipeline leak detector based on diode laser absorption spectroscopy. The detector transmits a 1.653 µm DFB diode laser beam with 10 mW of power and detects a fraction of the backscatter reflected from topographic targets. To eliminate the effect of the topographic scatter targets, a ratio detection technique was used. Wavelength modulation and harmonic detection were used to improve the detection sensitivity. The experimental detection limit is 50 ppm·m; remote detection at a distance of up to 20 m from the topographic scatter target is demonstrated. Using a known simulated leaking pipe, the minimum detectable pipe leak flux is less than 10 ml/min.
Wei, Fang-Fei; Li, Yan; Zhang, Lu; Xu, Ting-Yan; Ding, Feng-Hua; Wang, Ji-Guang; Staessen, Jan A
2014-04-01
Whether target organ damage is associated with blood pressure (BP) variability independent of level remains debated. We assessed these associations from 10-minute beat-to-beat, 24-hour ambulatory, and 7-day home BP recordings in 256 untreated subjects referred to a hypertension clinic. BP variability indices were variability independent of the mean, maximum-minimum difference, and average real variability. Effect sizes (standardized β) were computed using multivariable regression models. In beat-to-beat recordings, left ventricular mass index (n=128) was not (P≥0.18) associated with systolic BP but increased with all 3 systolic variability indices (+2.97-3.53 g/m(2); P<0.04); the urinary albumin-to-creatinine ratio increased (P≤0.03) with systolic BP (+1.14-1.17 mg/mmol) and maximum-minimum difference (+1.18 mg/mmol); and pulse wave velocity increased with systolic BP (+0.69 m/s; P<0.001). In 24-hour recordings, all 3 indices of organ damage increased (P<0.03) with systolic BP, whereas the associations with BP variability were nonsignificant (P≥0.15) except for increases in pulse wave velocity (P<0.05) with variability independent of the mean (+0.16 m/s) and maximum-minimum difference (+0.17 m/s). In home recordings, the urinary albumin-to-creatinine ratio (+1.27-1.30 mg/mmol) and pulse wave velocity (+0.36-0.40 m/s) increased (P<0.05) with systolic BP, whereas all associations of target organ damage with the variability indices were nonsignificant (P≥0.07). In conclusion, while accounting for BP level, associations of target organ damage with BP variability were readily detectable in beat-to-beat recordings, least noticeable in home recordings, with 24-hour ambulatory monitoring being informative only for pulse wave velocity.
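Two of the variability indices used above, average real variability and the maximum-minimum difference, have simple closed forms, sketched below; variability independent of the mean is omitted because it requires first fitting the SD-versus-mean relationship across the whole cohort. The example readings are invented.

```python
import numpy as np

def bp_variability_indices(sbp):
    """Compute two BP variability indices from one recording.

    sbp : array of systolic readings (beat-to-beat, 24-h ambulatory, or home values)
    Returns average real variability (ARV, the mean absolute successive difference)
    and the maximum-minimum difference (MMD).
    """
    sbp = np.asarray(sbp, dtype=float)
    arv = np.mean(np.abs(np.diff(sbp)))   # mean of absolute beat-to-beat changes
    mmd = sbp.max() - sbp.min()           # range of the recording
    return arv, mmd

print(bp_variability_indices([128, 131, 127, 135, 133, 126, 130]))
```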
NASA Astrophysics Data System (ADS)
Yang, Xue; Sun, Hao; Fu, Kun; Yang, Jirui; Sun, Xian; Yan, Menglong; Guo, Zhi
2018-01-01
Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection and the redundancy of the detection region. In order to solve such problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes, including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multi-scale detectors such as the Feature Pyramid Network (FPN), DFPN builds high-level semantic feature maps for all scales by means of dense connections, thereby enhancing feature propagation and encouraging feature reuse. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multi-scale ROI Align for the purpose of maintaining the completeness of semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on the R-DFPN representation has state-of-the-art performance.
Analysing generator matrices G of similar state but varying minimum determinants
NASA Astrophysics Data System (ADS)
Harun, H.; Razali, M. F.; Rahman, N. A. Abdul
2016-10-01
Since Tarokh discovered Space-Time Trellis Codes (STTC) in 1998, considerable effort has been made to improve the performance of the original STTC. One way of achieving enhancement is by focusing on the generator matrix G, which represents the encoder structure for the STTC. Until now, researchers have only concentrated on STTCs of different states in analyzing the performance of the generator matrix G. No effort has been made on different generator matrices G of similar state, the reason being that it is difficult to produce a wide variety of generator matrices G with diverse minimum determinants. In this paper, a number of generator matrices G with minimum determinants of four (4), eight (8) and sixteen (16) of the same state (i.e., 4-PSK) have been successfully produced. The performance of the different generator matrices G in terms of their bit error rate and signal-to-noise ratio in a Rayleigh fading environment is compared and evaluated. It is found from the MATLAB simulation that at low SNR (<8), the BER of generator matrices G with a smaller minimum determinant is comparatively lower than that of those with a higher minimum determinant. However, at high SNR (>14) there is no significant difference between the BERs of these generator matrices G.
NASA Astrophysics Data System (ADS)
Walker, Gary E.
2015-01-01
We observed the long-period (5.6 years) eclipsing binary variable star EE Cep during its 2014 eclipse. It was observed on every clear night from the Maria Mitchell Observatory as well as from remote sites, for a total of 25 nights. Each night consisted of a detailed time series in BVRI looking for short-term variations, for a total of >9000 observations. The data were transformed to the Standard System. In addition, a time series was captured during the night of the eclipse. These data provide an alternate method of determining the Time of Minimum from that traditionally performed. The TOM varied with color. Several strong correlations are seen between colors, substantiating the detection of variations on a time scale of hours. The long-term light curve shows 5 interesting and different phases with different characteristics.
Nordanstig, J; Pettersson, M; Morgan, M; Falkenberg, M; Kumlien, C
2017-09-01
Patient reported outcomes are increasingly used to assess outcomes after peripheral arterial disease (PAD) interventions. VascuQoL-6 (VQ-6) is a PAD specific health-related quality of life (HRQoL) instrument for routine clinical practice and clinical research. This study assessed the minimum important difference for the VQ-6 and determined thresholds for the minimum important difference and substantial clinical benefit following PAD revascularisation. This was a population-based observational cohort study. VQ-6 data from the Swedvasc Registry (January 2014 to September 2016) was analysed for revascularised PAD patients. The minimum important difference was determined using a combination of a distribution based and an anchor-based method, while receiver operating characteristic curve analysis (ROC) was used to determine optimal thresholds for a substantial clinical benefit following revascularisation. A total of 3194 revascularised PAD patients with complete VQ-6 baseline recordings (intermittent claudication (IC) n = 1622 and critical limb ischaemia (CLI) n = 1572) were studied, of which 2996 had complete VQ-6 recordings 30 days and 1092 a year after the vascular intervention. The minimum important difference 1 year after revascularisation for IC patients ranged from 1.7 to 2.2 scale steps, depending on the method of analysis. Among CLI patients, the minimum important difference after 1 year was 1.9 scale steps. ROC analyses demonstrated that the VQ-6 discriminative properties for a substantial clinical benefit was excellent for IC patients (area under curve (AUC) 0.87, sensitivity 0.81, specificity 0.76) and acceptable in CLI (AUC 0.736, sensitivity 0.63, specificity 0.72). An optimal VQ-6 threshold for a substantial clinical benefit was determined at 3.5 scale steps among IC patients and 4.5 in CLI patients. The suggested thresholds for minimum important difference and substantial clinical benefit could be used when evaluating VQ-6 outcomes following different interventions in PAD and in the design of clinical trials. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
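Threshold selection from a ROC analysis of this kind is commonly done by maximizing Youden's J (sensitivity + specificity - 1). The sketch below shows that step on fabricated change scores; it does not use Swedvasc data, and the anchor definition of "benefited" is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_threshold(delta_vq6, benefited):
    """Pick the VQ-6 change-score cut-off that maximizes Youden's J.

    delta_vq6 : change in VQ-6 score one year after revascularisation
    benefited : 1 if the anchor classed the patient as substantially improved, else 0
    """
    fpr, tpr, thresholds = roc_curve(benefited, delta_vq6)
    j = tpr - fpr                                  # Youden's J at each candidate cut-off
    best = int(np.argmax(j))
    return thresholds[best], roc_auc_score(benefited, delta_vq6)

# Fabricated example data, purely for illustration
rng = np.random.default_rng(5)
benefited = rng.integers(0, 2, 300)
delta = np.where(benefited == 1, rng.normal(5, 2.5, 300), rng.normal(1, 2.5, 300))
print(optimal_threshold(delta, benefited))
```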
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in samples of Nitraria roborowskii from fifteen different regions were determined by inductively coupled plasma-atomic emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were determined in N. roborowskii, of which V could not be detected. In addition, Na, K and Ca showed high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were obtained from the original data. The cumulative variance contribution rate is 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. All the results will provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view, array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
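To make the coherence-factor weighting concrete, the sketch below applies CF to a plain delay-and-sum value at a single image point; it does not implement the full minimum-variance beamformer or the HOG-based array-level weighting proposed in the paper, and the geometry and speed-of-sound values are assumptions.

```python
import numpy as np

def das_cf_point(rf, elem_x, px, pz, fs, c=1540.0):
    """Delay-and-sum value at one image point, weighted by the coherence factor.

    rf      : (n_elements, n_samples) received photoacoustic RF data
    elem_x  : lateral positions of the linear-array elements (m)
    (px,pz) : image point coordinates (m), with the array at z = 0
    fs      : sampling frequency (Hz); c : assumed constant speed of sound (m/s)
    """
    dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)        # one-way PA propagation distance
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
    samples = rf[np.arange(rf.shape[0]), idx]           # delayed channel samples
    das = samples.sum()
    # Coherence factor: coherent power over total power across the aperture
    cf = abs(das) ** 2 / (rf.shape[0] * np.sum(np.abs(samples) ** 2) + 1e-12)
    return cf * das
```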
Mirmohseni, A; Abdollahi, H; Rostamizadeh, K
2007-02-28
A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of a binary mixture of ethanol and water by a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of calibration and prediction sets in the concentration ranges 5.5-22.2 µg mL(-1) for ethanol and 7.01-28.07 µg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in any considered time window for each test sample. A moving window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and a minimum PRESS value. On the basis of the obtained results, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. The calculation of the net analyte signal using the HLA/GO method allows the determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of the adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.
Metagenomic analysis of nitrogen and methane cycling in the Arabian Sea oxygen minimum zone.
Lüke, Claudia; Speth, Daan R; Kox, Martine A R; Villanueva, Laura; Jetten, Mike S M
2016-01-01
Oxygen minimum zones (OMZ) are areas in the global ocean where oxygen concentrations drop to below one percent. Low oxygen concentrations allow alternative respiration with nitrate and nitrite as electron acceptor to become prevalent in these areas, making them main contributors to oceanic nitrogen loss. The contribution of anammox and denitrification to nitrogen loss seems to vary in different OMZs. In the Arabian Sea, both processes were reported. Here, we performed a metagenomics study of the upper and core zone of the Arabian Sea OMZ, to provide a comprehensive overview of the genetic potential for nitrogen and methane cycling. We propose that aerobic ammonium oxidation is carried out by a diverse community of Thaumarchaeota in the upper zone of the OMZ, whereas a low diversity of Scalindua-like anammox bacteria contribute significantly to nitrogen loss in the core zone. Aerobic nitrite oxidation in the OMZ seems to be performed by Nitrospina spp. and a novel lineage of nitrite oxidizing organisms that is present in roughly equal abundance as Nitrospina. Dissimilatory nitrate reduction to ammonia (DNRA) can be carried out by yet unknown microorganisms harbouring a divergent nrfA gene. The metagenomes do not provide conclusive evidence for active methane cycling; however, a low abundance of novel alkane monooxygenase diversity was detected. Taken together, our approach confirmed the genomic potential for an active nitrogen cycle in the Arabian Sea and allowed detection of hitherto overlooked lineages of carbon and nitrogen cycle bacteria.
Theoretical detection threshold of the proton-acoustic range verification technique.
Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei
2015-10-01
Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1-10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. The calculated noise in the transducer was 12-28 mPa, depending on the transducer central frequency (70-380 kHz). The minimum number of protons detectable by the technique was on the order of 3-30 × 10^6 per pulse, with 30-800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current.
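Because the received pressure scales linearly with the number of protons per pulse, the step from per-proton signal to a minimum detectable proton number is simple arithmetic, as sketched below; the per-proton pressure and the required SNR are hypothetical placeholders, not values from the simulation.

```python
# Back-of-the-envelope estimate of the minimum detectable proton number,
# assuming the received pressure is linear in the protons per pulse.
noise_rms_pa = 20e-3      # transducer thermal noise, within the 12-28 mPa range quoted
p_per_proton_pa = 1e-8    # hypothetical received pressure per proton (assumed, in Pa)
snr_required = 5.0        # detection criterion (assumed)

n_min = snr_required * noise_rms_pa / p_per_proton_pa
print(f"minimum protons per pulse ≈ {n_min:.1e}")
```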
Higher order spectra and their use in digital communication signal estimation
NASA Astrophysics Data System (ADS)
Yayci, Cihat
1995-03-01
This thesis compared the detection ability of the spectrogram, the 1-1/2D instantaneous power spectrum (1-1/2D_ips), the bispectrum, and the outer product (dyadic) representation for digitally modulated signals corrupted by additive white Gaussian noise. Four detection schemes were tried on noise-free BPSK, QPSK, FSK, and OOK signals using different transform lengths. After determining the optimum transform length, each test signal was corrupted by additive white Gaussian noise. Different SNR levels were used to determine the lowest SNR level at which the message or the modulation type could be extracted. The optimal transform length was found to be the symbol duration when processing BPSK, OOK, and FSK via the spectrogram, the 1-1/2D_ips or the bispectrum method. The best transform size for QPSK was half of the symbol length. For the outer product (dyadic) spectral representation, the best transform size was four times larger than the symbol length. For all processing techniques, with the exception of the outer product representation, the minimum detectable SNR is about 15 dB for BPSK, FSK, and OOK signals and about 20 dB for QPSK signals. For the outer product spectral method, these values tend to be about 10 dB lower.
Figure-ground segregation can rely on differences in motion direction.
Kandil, Farid I; Fahle, Manfred
2004-12-01
If the elements within a figure move synchronously while those in the surround move at a different time, the figure is easily segregated from the surround and thus perceived. Lee and Blake (1999) [Visual form created solely from temporal structure. Science, 284, 1165-1168] demonstrated that this figure-ground separation may be based not only on time differences between motion onsets, but also on the differences between reversals of motion direction. However, Farid and Adelson (2001) [Synchrony does not promote grouping in temporally structured displays. Nature Neuroscience, 4, 875-876] argued that figure-ground segregation in the motion-reversal experiment might have been based on a contrast artefact and concluded that (a)synchrony as such was 'not responsible for the perception of form in these or earlier displays'. Here, we present experiments that avoid contrast artefacts but still produce figure-ground segregation based on purely temporal cues. Our results show that subjects can segregate figure from ground even though being unable to use motion reversals as such. Subjects detect the figure when either (i) motion stops (leading to contrast artefacts), or (ii) motion directions differ between figure and ground. Segregation requires minimum delays of about 15 ms. We argue that whatever the underlying cues and mechanisms, a second stage beyond motion detection is required to globally compare the outputs of local motion detectors and to segregate figure from ground. Since analogous changes take place in both figure and ground in rapid succession, this second stage has to detect the asynchrony with high temporal precision.
Chen, Yi-Ting; Sarangadharan, Indu; Sukesan, Revathi; Hseih, Ching-Yen; Lee, Geng-Yen; Chyi, Jen-Inn; Wang, Yu-Lin
2018-05-29
A lead-ion-selective membrane (Pb-ISM) coated AlGaN/GaN high electron mobility transistor (HEMT) was used to demonstrate a whole new methodology for ion-selective FET sensors, which can create ultra-high sensitivity (-36 mV/log [Pb2+]) surpassing the limit of the ideal sensitivity (-29.58 mV/log [Pb2+]) in the typical Nernst equation for the lead ion. The largely improved sensitivity has tremendously reduced the detection limit (10^-10 M) by several orders of magnitude of lead ion concentration compared to a typical ion-selective electrode (ISE) (10^-7 M). The high sensitivity was obtained by creating a strong field between the gate electrode and the HEMT channel. A systematic investigation was performed by measuring different sensor designs and gate biases, indicating that ultra-high sensitivity and an ultra-low detection limit are obtained only in a sufficiently strong field. A theoretical study of the sensitivity consistently agrees with the experimental findings and predicts the maximum and minimum sensitivity. The detection limit of our sensor is comparable to that of Inductively Coupled Plasma Mass Spectrometry (ICP-MS), which also has a detection limit near 10^-10 M.
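The -29.58 mV per decade "ideal" limit quoted above is simply the Nernstian slope for a divalent cation at 25 °C, which follows directly from the Nernst equation:

```latex
S \;=\; \frac{2.303\,R\,T}{z\,F}
  \;=\; \frac{2.303 \times 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298.15\ \mathrm{K}}
             {2 \times 96485\ \mathrm{C\,mol^{-1}}}
  \;\approx\; 29.58\ \mathrm{mV\ per\ decade}
```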
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density function, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we will consider a robust variant of the Khachiyan algorithm for minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
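A minimum-volume enclosing ellipsoid in the Khachiyan style referenced above can be sketched in a few lines; this is a plain (non-robust) version for illustration, with the tolerance and iteration limit chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def mvee(points, tol=1e-4, max_iter=1000):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid.

    points : (n, d) array of background pixels (e.g. spectral vectors)
    Returns (A, c) such that (x - c)^T A (x - c) <= 1 describes the ellipsoid.
    """
    p = np.asarray(points, dtype=float)
    n, d = p.shape
    q = np.vstack([p.T, np.ones(n)])               # lift points to d+1 dimensions
    u = np.full(n, 1.0 / n)                        # initial uniform weights
    for _ in range(max_iter):
        x = q @ np.diag(u) @ q.T
        m = np.einsum('ij,ji->i', q.T @ np.linalg.inv(x), q)   # per-point scores
        j = int(np.argmax(m))
        step = (m[j] - d - 1.0) / ((d + 1.0) * (m[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = p.T @ u                                    # ellipsoid centre
    cov = (p.T * u) @ p - np.outer(c, c)
    A = np.linalg.inv(cov) / d                     # ellipsoid shape matrix
    return A, c
```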
1992-05-31
Samples were preserved in 1% glutaraldehyde before application to carbon-coated grids; a loss in titer (27) occurred when viruses were added to [...]. Virus-like particles (VLPs) were enumerated by counting a minimum of 100 VLPs from 5 random fields at a magnification of 10,000x; for most applications, counting a minimum of 20 fields containing at [...]. Cited references include Hayat, M.A. (ed.), Electron microscopy: biological applications, Vol. 4, Van Nostrand Reinhold, N.Y., 1974, pp. 79-104, and Proctor, L.M., and Fuhrman, J.A., Viral mortality [...].
Monitoring and localization of buried plastic natural gas pipes using passive RF tags
NASA Astrophysics Data System (ADS)
Mondal, Saikat; Kumar, Deepak; Ghazali, Mohd. Ifwat; Chahal, Prem; Udpa, Lalita; Deng, Yiming
2018-04-01
A passive harmonic radio frequency (RF) tag on the pipe with added sensing capabilities is proposed in this paper. Radio frequency identification (RFID) based tagging has already emerged as a potential solution for chemical sensing, location detection, animal tagging, etc. Harmonic transponders are already quite popular compared to conventional RFIDs due to their improved signal-to-noise ratio (SNR). However, the operating frequency, transmitted power and tag efficiency become critical issues for underground RFIDs. In this paper, comprehensive on-tag sensing, power budget and frequency analyses are performed for the buried harmonic tag design. Accurate tracking of infrastructure burial depth is proposed to reduce the probability of failure of underground pipelines. Burial depth is estimated using the phase information of received signals at different frequencies, calculated using genetic algorithm (GA) based optimization for post-processing. A suitable frequency range is determined for a variety of soils with different moisture contents for a small tag-antenna size. Different types of harmonic tags, such as 1) Schottky diode and 2) Non-linear Transmission Line (NLTL) tags, were compared for underground applications. In this study, the power, frequency and tag design have been optimized to achieve a small antenna size, minimum signal loss and a simple reader circuit for underground detection at up to 5 feet depth in different soil media and moisture contents.
Ammonia Optical Sensing by Microring Resonators.
Passaro, Vittorio M N; Dell'Olio, Francesco; De Leonardis, Francesco
2007-11-15
A very compact (device area around 40 μm²) optical ammonia sensor based on a microring resonator is presented in this work. Silicon-on-insulator technology is used in the sensor design and a dye-doped polymer is adopted as the sensing material. The sensor exhibits a very good linearity and a minimum detectable refractive index shift of the sensing material as low as 8×10^-5, with a detection limit around 4 ‰.
Target Glint Suppression Technology.
1980-09-01
The report is organized into two principal sections. Section 2 addresses the impact of target effects on the noncoherent detection problem associated with [...] zero pulse-to-pulse correlation. Results are presented for a scanning search radar which is assumed to noncoherently integrate N pulses. Generally speaking, detection performance is shown to be a maximum when the pulse-to-pulse correlation is a minimum. As a result, noncoherent search radars should [...].
Sashidhar, R B
1993-10-01
Aflatoxin contamination of food and feed has gained global significance due to its deleterious effects on human and animal health and its importance in international trade. The potential of aflatoxin as a carcinogen, mutagen, teratogen, and immunosuppressive agent is well documented. The problem of aflatoxin contamination of food and feed has led to the enactment of various legislation. However, meaningful strategies for implementing this legislation are limited by the non-availability of simple, cost-effective methods for screening and detecting aflatoxin under field conditions. Keeping in mind the analytical constraints in developing countries, a simple-to-operate, rapid, reliable, and cost-effective portable aflatoxin detection kit has been developed. The important components of the kit include a hand-held UV lamp (365 nm, 4 W output), a solvent blender (12,000 rpm) for toxin extraction, and adsorbent-coated dip-strips (polyester film) for detecting and quantifying aflatoxin. Analysis of variance indicates that there were no significant differences between various batches of dip-strips (p > 0.05). The minimum detection limit for aflatoxin B1 was 10 ppb per spot. The kit may find wide application as a research tool in public health laboratories, environmental monitoring agencies, and in the poultry industry.
Sashidhar, R B
1993-01-01
Aflatoxin contamination of food and feed has gained global significance due to its deleterious effects on human and animal health and its importance in international trade. The potential of aflatoxin as a carcinogen, mutagen, teratogen, and immunosuppressive agent is well documented. The problem of aflatoxin contamination of food and feed has led to the enactment of various legislation. However, meaningful strategies for implementing this legislation are limited by the nonavailability of simple, cost-effective methods for screening and detection of aflatoxin under field conditions. Keeping in mind the analytical constraints in developing countries, a simple-to-operate, rapid, reliable, and cost-effective portable aflatoxin detection kit has been developed. The important components of the kit include a hand-held UV lamp (365 nm, 4 W output), a solvent blender (12,000 rpm) for toxin extraction, and adsorbent-coated dip-strips (polyester film) for detecting and quantifying aflatoxin. Analysis of variance indicates that there were no significant differences between various batches of dip-strips (p > 0.05). The minimum detection limit for aflatoxin B1 was 10 ppb per spot. The kit may find wide application as a research tool in public health laboratories, environmental monitoring agencies, and in the poultry industry. PMID:8143644
Automatic detection of kidney in 3D pediatric ultrasound images using deep neural networks
NASA Astrophysics Data System (ADS)
Tabrizi, Pooneh R.; Mansoor, Awais; Biggs, Elijah; Jago, James; Linguraru, Marius George
2018-02-01
Ultrasound (US) imaging is the routine and safe diagnostic modality for detecting pediatric urology problems, such as hydronephrosis in the kidney. Hydronephrosis is the swelling of one or both kidneys because of the build-up of urine. Early detection of hydronephrosis can lead to a substantial improvement in kidney health outcomes. Generally, US imaging is a challenging modality for the evaluation of pediatric kidneys with different shape, size, and texture characteristics. The aim of this study is to present an automatic detection method to help kidney analysis in pediatric 3DUS images. The method localizes the kidney based on its minimum-volume oriented bounding box using deep neural networks. Separate deep neural networks are trained to estimate the kidney position, orientation, and scale, making the method computationally efficient by avoiding full parameter training. The performance of the method was evaluated using a dataset of 45 kidneys (18 normal and 27 diseased kidneys diagnosed with hydronephrosis) through the leave-one-out cross-validation method. Quantitative results show the proposed detection method could extract the kidney position, orientation, and scale ratio with root mean square values of 1.3 +/- 0.9 mm, 6.34 +/- 4.32 degrees, and 1.73 +/- 0.04, respectively. This method could be helpful in automating kidney segmentation for routine clinical evaluation.
A fluorescence-based centrifugal microfluidic system for parallel detection of multiple allergens
NASA Astrophysics Data System (ADS)
Chen, Q. L.; Ho, H. P.; Cheung, K. L.; Kong, S. K.; Suen, Y. K.; Kwan, Y. W.; Li, W. J.; Wong, C. K.
2010-02-01
This paper reports a robust polymer-based centrifugal microfluidic analysis system that can provide parallel detection of multiple allergens in vitro. Many commercial food products (milk, beans, pollen, etc.) may cause allergic reactions in people, so a low-cost device for rapid detection of allergens is highly desirable. With this as the objective, we have studied the feasibility of using a rotating disk device incorporating centrifugal microfluidics for performing actuation-free and multi-analyte detection of different allergen species with minimum sample usage and fast response time. Degranulation in basophils or mast cells is an indicator of an allergic reaction. In this connection, we used acridine orange (AO) to demonstrate degranulation in KU812 human basophils. It was found that the AO was released from granules when cells were stimulated by ionomycin, thus signifying the release of histamine, which accounts for allergy symptoms [1-2]. Within this rotating optical platform, major microfluidic components, including sample reservoirs, reaction chambers, microchannels and flow-control compartments, are integrated into a single bio-compatible polydimethylsiloxane (PDMS) substrate. The flow sequence and reaction time can be controlled precisely, and by sequentially varying the spinning speed the disk can perform a variety of sample loading, reaction and detection steps. Our work demonstrates the feasibility of using centrifugation as a possible immunoassay system in the future.
The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?
Boyen, Kris; Başkent, Deniz
2015-01-01
Objectives: Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Design: Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss–matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Results: Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. Conclusions: These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans. PMID:25822647
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight
Livingston, Melvin D.; Markowitz, Sara; Wagenaar, Alexander C.
2016-01-01
Objectives. To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. Methods. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28–364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Results. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. Conclusions. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year. PMID:27310355
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.
Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C
2016-08-01
To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28-364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.
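The two records above describe a difference-in-differences design with state and year fixed effects. A minimal sketch of that kind of two-way fixed-effects regression is shown below; the file name and column names (state, year, low_bw_rate, min_wage_gap) are hypothetical placeholders, not the authors' data or exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-by-month panel: outcome rate, minimum wage in dollars above
# the federal level, plus state and year identifiers (column names are placeholders).
df = pd.read_csv("state_month_panel.csv")

# Two-way fixed-effects difference-in-differences: the coefficient on min_wage_gap
# estimates the effect of a $1 increase in the state minimum wage above the federal level.
model = smf.ols("low_bw_rate ~ min_wage_gap + C(state) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster by state
print(result.params["min_wage_gap"], result.bse["min_wage_gap"])
```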
The Minimum Data Set Depression Quality Indicator: Does It Reflect Differences in Care Processes?
ERIC Educational Resources Information Center
Simmons, S.F.; Cadogan, M.P.; Cabrera, G.R.; Al-Samarrai, N.R.; Jorge, J.S.; Levy-Storms, L.; Osterweil, D.; Schnelle, J.F.
2004-01-01
Purpose. The objective of this work was to determine if nursing homes that score differently on prevalence of depression, according to the Minimum Data Set (MDS) quality indicator, also provide different processes of care related to depression. Design and Methods. A cross-sectional study with 396 long-term residents in 14 skilled nursing…
NASA Astrophysics Data System (ADS)
Loboda, I. P.; Bogachev, S. A.
2015-07-01
We employ an automated detection algorithm to perform a global study of solar prominence characteristics. We process four months of TESIS observations in the He II 304 Å line taken close to the solar minimum of 2008-2009 and mainly focus on quiescent and quiescent-eruptive prominences. We detect a total of 389 individual features ranging from 25×25 to 150×500 Mm² in size and obtain distributions of many of their spatial characteristics, such as latitudinal position, height, size, and shape. To study their dynamics, we classify prominences as either stable or eruptive and calculate their average centroid velocities, which are found to rarely exceed 3 km/s. In addition, we give rough estimates of mass and gravitational energy for every detected prominence and use these values to estimate the total mass and gravitational energy of all simultaneously existing prominences (10¹² - 10¹⁴ kg and 10²⁹ - 10³¹ erg). Finally, we investigate the form of the gravitational energy spectrum of prominences and derive it to be a power-law of index -1.1 ± 0.2.
Bader, Chris; Jesudoss Chelladurai, Jeba; Starling, David E; Jones, Douglas E; Brewer, Matthew T
2017-10-01
Control of parasitic infections may be achieved by eliminating developmental stages present within intermediate hosts, thereby disrupting the parasite life cycle. For several trematodes relevant to human and veterinary medicine, this involves targeting the metacercarial stage found in fish intermediate hosts. Treatment of fish with praziquantel is one potential approach for targeting the metacercaria stage. To date, studies investigating praziquantel-induced metacercarial death in fish rely on counting parasites and visually assessing morphology or movement. In this study, we investigate quantitative methods for detecting praziquantel-induced death using a Posthodiplostomum minimum model. Our results revealed that propidium iodide staining accurately identified praziquantel-induced death and the level of staining was proportional to the concentration of praziquantel. In contrast, detection of ATP, resazurin metabolism, and trypan blue staining were poor indicators of metacercarial death. The propidium iodide method offers an advantage over simple visualization of parasite movement and could be used to determine EC50 values relevant for comparison of praziquantel sensitivity or resistance. Copyright © 2017 Elsevier Inc. All rights reserved.
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
NASA Astrophysics Data System (ADS)
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, the approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
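The maximum relevance minimum redundancy (mRMR) criterion used above greedily adds the feature whose mutual information with the class label is highest after penalizing its average mutual information with already selected features. The following is a generic sketch of that criterion, assuming an arbitrary feature matrix X and label vector y; it is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def discretize(col, n_bins=8):
    # Equal-frequency binning of one feature so pairwise MI can be estimated simply
    edges = np.unique(np.quantile(col, np.linspace(0, 1, n_bins + 1)[1:-1]))
    return np.digitize(col, edges)

def mrmr_select(X, y, n_features):
    """Greedy mRMR: repeatedly pick the feature with the largest
    relevance (MI with label) minus mean redundancy (MI with selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    Xd = np.column_stack([discretize(X[:, j]) for j in range(X.shape[1])])
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features and remaining:
        scores = []
        for j in remaining:
            redundancy = (np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
                          if selected else 0.0)
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage (with any geometrical feature matrix X and expression labels y):
# chosen = mrmr_select(X, y, n_features=20)
```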
The wind-wind collision hole in eta Car
NASA Astrophysics Data System (ADS)
Damineli, A.; Teodoro, M.; Richardson, N. D.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.; Groh, J. H.; Weigelt, G.; Hillier, D. J.; Russell, C.; Moffat, A.; Pollard, K. R.; Madura, T. I.
2017-11-01
Eta Carinae is one of the most massive observable binaries. Yet determination of its orbital and physical parameters is hampered by obscuring winds. However, the effects of the strong colliding winds change with phase due to the high orbital eccentricity. We wanted to improve measures of the orbital parameters and to determine the mechanisms that produce the relatively brief, phase-locked minimum detected throughout the electromagnetic spectrum. We conducted intense monitoring of the He II λ4686 line in η Carinae for 10 months in the year 2014, gathering ~300 high-S/N spectra with ground- and space-based telescopes. We also used published spectra at the FOS4 SE polar region of the Homunculus, which views the minimum from a different direction. We used a model in which the He II λ4686 emission is produced by two mechanisms: a) one linked to the intensity of the wind-wind collision, which occurs along the whole orbit and is proportional to the inverse square of the separation between the companion stars; and b) another produced by the 'bore hole' effect, which occurs at phases across the periastron passage. The opacity (computed from 3D SPH simulations) convolved with the emission reproduces the behavior of equivalent widths both for direct and reflected light. Our main results are: a) a demonstration that the He II λ4686 light curve is exquisitely repeatable from cycle to cycle, contrary to previous claims of large changes; b) an accurate determination of the longitude of periastron, indicating that the secondary star is 'behind' the primary at periastron, a dispute extended over the past decade; c) a determination of the time of periastron passage, at ~4 days after the onset of the deep light-curve minimum; and d) a demonstration that the minimum is simultaneous for observers at different lines of sight, indicating that it is not caused by an eclipse of the secondary star, but rather by the immersion of the wind-wind collision interior to the inner wind of the primary.
NASA Astrophysics Data System (ADS)
Ventrillard-Courtillot, Irene; Gonthiez, Thierry; Clerici, Christine; Romanini, Daniel
2009-11-01
We demonstrate a first application of optical-feedback cavity-enhanced absorption spectroscopy (OF-CEAS) to breath analysis in a medical environment. Noninvasive monitoring of trace species in exhaled air was performed simultaneously with spirometric measurements on patients at Bichat Hospital (Paris). The high selectivity of the OF-CEAS spectrometer and a time response of 0.3 s (limited by sample flow rate) allowed following the evolution of carbon monoxide and methane concentrations during individual respiratory cycles, and resolving variations among different ventilatory patterns. The minimum detectable absorption on this time scale is about 3×10⁻¹⁰ cm⁻¹. At the working wavelength of the instrument (2.326 μm), this translates to concentration detection limits of ~1 ppbv (45 picomolar, or ~1.25 μg/m³) for CO and 25 ppbv for CH₄, well below concentration values found in exhaled air. This same instrument is also able to provide measurement of NH₃ concentrations with a detection limit of ~10 ppbv; however, at present, memory effects do not allow its measurement on fast time scales.
Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms
Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2014-01-01
This study presents a fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED), and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks on bearing vibration data with measurement noise. Given the perspective of the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a defect on bearings could be highly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs), through empirical mode decomposition (EMD). The weight vectors of IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm, to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with defect, and an intact system. PMID:24368701
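Two of the quantities used above are simple to compute directly: the Teager-Kaiser Energy Operator, ψ[n] = x[n]² − x[n−1]·x[n+1], and the kurtosis of the processed signal. The sketch below, on a synthetic bearing-like signal (all signal parameters are illustrative assumptions), shows how TKEO sharpens repeating defect impulses and how kurtosis then serves as the sensitivity measure; it is not the authors' code.

```python
import numpy as np
from scipy.stats import kurtosis

def tkeo(x):
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Synthetic vibration record: decaying high-frequency bursts (defect impacts) in noise
fs = 20000
t = np.arange(0, 1.0, 1.0 / fs)
signal = 0.2 * np.random.randn(t.size)
burst = np.exp(-np.arange(40) / 8.0) * np.sin(2 * np.pi * 3000 * np.arange(40) / fs)
for ti in np.arange(0.01, 0.95, 0.009):        # ~111 Hz repetition rate
    idx = int(ti * fs)
    if idx + burst.size <= signal.size:
        signal[idx:idx + burst.size] += burst

print("kurtosis of raw signal :", kurtosis(signal, fisher=False))
print("kurtosis after TKEO    :", kurtosis(tkeo(signal), fisher=False))
# A larger kurtosis after TKEO reflects sharper discrimination of the repeating peaks.
```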
Sensing of Substrate Vibrations in the Adult Cicada Okanagana rimosa (Hemiptera: Cicadidae).
Alt, Joscha A; Lakes-Harlan, Reinhard
2018-05-01
Detection of substrate vibrations is an evolutionarily old sensory modality and is important for predator detection as well as for intraspecific communication. In insects, substrate vibrations are detected mainly by scolopidial (chordotonal) sense organs found at different sites in the legs. Among these sense organs, the tibial subgenual organ (SGO) is one of the most sensitive sensors. The neuroanatomy and physiology of vibratory sense organs of cicadas is not well known. Here, we investigated the leg nerve by neuronal tracing and summed nerve recordings. Tracing with Neurobiotin revealed that the cicada Okanagana rimosa (Say) (Hemiptera: Cicadidae) has a femoral chordotonal organ with about 20 sensory cells and a tibial SGO with two sensory cells. Recordings from the leg nerve show that the vibrational response is broadly tuned, with a threshold of about 1 m/s² and a minimum latency of about 6 ms. The vibratory sense of cicadas might be used in predator avoidance and intraspecific communication, although no tuning to the peak frequency of the calling song (9 kHz) could be found.
Toward development of mobile application for hand arthritis screening.
Akhbardeh, Farhad; Vasefi, Fartash; Tavakolian, Kouhyar; Bradley, David; Fazel-Rezai, Reza
2015-01-01
Arthritis is one of the most common health problems affecting people throughout the world. The goal of the work presented in this paper is to provide individuals who may be developing or have developed arthritis with a mobile application to assess and monitor the progress of their disease using their smartphone. The image processing algorithm includes a finger border detection step to monitor joint thickness and angular deviation abnormalities, which are common symptoms of arthritis. In this work, we have analyzed and compared gradient, thresholding and Canny algorithms for border detection. The effect of image spatial resolution (down-sampling) is also investigated. The results, calculated from 36 joint measurements, show that the mean errors for the gradient, thresholding, and Canny methods are 0.20, 2.13, and 2.03 mm, respectively. In addition, the average error for different image resolutions is analyzed and the minimum required resolution is determined for each method. The results confirm that recent smartphone imaging capabilities can provide enough accuracy for hand border detection and finger joint analysis based on the gradient method.
NASA Astrophysics Data System (ADS)
Zhang, Xiumei; Xu, Shicai; Jiang, Shouzhen; Wang, Jihua; Wei, Jie; Xu, Shida; Gao, Shoubao; Liu, Hanping; Qiu, Hengwei; Li, Zhen; Liu, Huilan; Li, Zhenhua; Li, Hongsheng
2015-10-01
We present a graphene/silver-copper nanoparticle hybrid system (G/SCNPs) to be used as a high-performance surface-enhanced Raman scattering (SERS) substrate. The silver-copper nanoparticles wrapped by a monolayer graphene layer are directly synthesized on SiO2/Si substrate by chemical vapor deposition in a mixture of methane and hydrogen. The G/SCNPs show excellent SERS enhancement activity and high reproducibility. The minimum detected concentration of R6G is as low as 10⁻¹⁰ M and the calibration curve shows a good linear response from 10⁻⁶ to 10⁻¹⁰ M. The data fluctuations from 20 positions of one SERS substrate are less than 8% and those from 20 different substrates are less than 10%. The high reproducibility of the enhanced Raman signals could be due to the presence of an ultrathin graphene layer and the uniform morphology of the silver-copper nanoparticles. The use of G/SCNPs for the detection of nucleosides extracted from human urine demonstrates great potential for practical detection applications in the medicine and biotechnology fields.
Continuous-wave cavity ringdown spectroscopy based on the control of cavity reflection.
Li, Zhixin; Ma, Weiguang; Fu, Xiaofang; Tan, Wei; Zhao, Gang; Dong, Lei; Zhang, Lei; Yin, Wangbao; Jia, Suotang
2013-07-29
A new type of continuous-wave cavity ringdown spectrometer based on the control of cavity reflection for trace gas detection was designed and evaluated. The technique separated the acquisition of the ringdown event and the trigger signal to the optical switch by detecting the cavity reflection and transmission, respectively. A detailed description of the time sequence of the measurement process is presented. In order to avoid the occasional incorrect extraction of the ringdown time in the fitting procedure, the laser frequency and cavity length were scanned synchronously. Based on the statistical analysis of measured ringdown times, the frequency-normalized minimum detectable absorption in the reflection control mode was 1.7 × 10⁻⁹ cm⁻¹ Hz⁻¹/², which was 5.4 times smaller than that in the transmission control mode. However, the signal-to-noise ratio of the absorption spectrum was improved only by a factor of 3 because of an etalon effect. Finally, the peak absorption coefficients of the C₂H₂ transition near 1530.9 nm at different pressures showed good agreement with the theoretical values.
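A ringdown event is an exponential decay whose time constant shortens when an absorber is present; the absorption coefficient follows from the standard relation α = (1/c)(1/τ − 1/τ₀). The sketch below fits a synthetic ringdown trace and applies that relation; the trace, the empty-cavity τ₀ and all numbers are assumed values for illustration, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

C_CM_S = 2.998e10   # speed of light, cm/s
TAU_EMPTY = 30e-6   # assumed empty-cavity ringdown time, s (illustrative)

def ringdown(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

# Synthetic detected ringdown trace; absorption shortens tau slightly
t = np.linspace(0.0, 200e-6, 2000)
trace = ringdown(t, 1.0, 28e-6, 0.01) + 0.005 * np.random.randn(t.size)

popt, _ = curve_fit(ringdown, t, trace, p0=(1.0, 20e-6, 0.0))
tau = popt[1]

# Absorption coefficient from the change in cavity loss rate
alpha = (1.0 / C_CM_S) * (1.0 / tau - 1.0 / TAU_EMPTY)
print("fitted tau = %.2f us, alpha = %.2e cm^-1" % (tau * 1e6, alpha))
```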
Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker
2007-04-15
In this study, 10 different dairy cattle barns with natural ventilation systems were investigated in terms of structural aspects. The VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 °C). Variation in indoor temperature was also determined for the above-mentioned conditions. In the investigated barns, assuming the minimum ventilation requirement was met for outdoor design temperatures of -3, 0 and 1.7 °C and indoor relative humidities (IRH) of 70% and 80%, the estimated indoor temperatures ranged from 2.2 to 12.2 °C at 70% IRH and from 4.3 to 15.0 °C at 80% IRH. Barn type, outdoor design temperature and indoor relative humidity significantly (p < 0.01) affected the indoor temperature. The highest ventilation requirement was calculated for the straw yard (13,879 m³ h⁻¹) while the lowest was estimated for the tie-stall (6,169.20 m³ h⁻¹). Estimated minimum ventilation requirements per animal differed significantly (p < 0.01) among barn types. The effect of outdoor design temperature on minimum ventilation requirements and on minimum ventilation requirements per animal was also significant (p < 0.05, p < 0.01). Estimated indoor temperatures were within the thermoneutral zone (-2 to 20 °C). Therefore, it can be said that the use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.
Pendyala, Brahmaiah; Chaganti, Subba Rao; Lalman, Jerald A; Heath, Daniel D
2016-03-01
The objective of this study was to establish the impact of different steam-exploded organic fractions in municipal solid waste (MSW) on electricity production using microbial fuel cells (MFCs). In particular, the influence of individual steam-exploded liquefied waste components (food waste (FW), paper-cardboard waste (PCW) and garden waste (GW)) and their blends on chemical oxygen demand (COD) removal, coulombic efficiency (CE) and microbial diversity was examined using a mixture design. Maximum power densities from 0.56 to 0.83 W m⁻² were observed for MFCs fed with different feedstocks. The maximum COD removal and minimum CE were observed for the GW feed. However, a reverse trend (minimum COD removal and maximum CE) was observed for the FW feed. A maximum COD removal (78%) accompanied by a maximum CE (24%) was observed for a combined feed of FW, PCW and GW in a 1:1:1 ratio. Lactate, the major byproduct detected, was unutilized by the anodic biofilm community. The organic fraction of municipal solid waste (OFMSW) could serve as a potential feedstock for electricity generation in MFCs; however, elevated protein levels will lead to reduced COD removal. The microbial communities in cultures fed FW and PCW were highly diversified; however, the communities in cultures fed FW or a feed mixture containing high FW levels were similar and dominated by Bacteroidetes and β-proteobacteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
Assessment of health-related quality of life in spine treatment: conversion from SF-36 to VR-12.
Gornet, Matthew F; Copay, Anne G; Sorensen, Katrine M; Schranck, Francine W
2018-07-01
Health-related quality-of-life outcomes have been collected with the Medical Outcomes Study (MOS) Short Form 36 (SF-36) survey. Boston University School of Public Health has developed algorithms for the conversion of SF-36 to Veterans RAND 12-Item Health Survey (VR-12) Physical Component Summary (PCS) and Mental Component Summary (MCS) scores. The purpose of the present study is to investigate the conversion of the SF-36 to VR-12 PCS and MCS scores. Preoperative and postoperative SF-36 surveys were collected from patients who underwent lumbar or cervical surgery performed by a single surgeon between August 1998 and January 2013. SF-36 PCS and MCS scores were calculated following the original scoring instructions. The SF-36 answers were then converted to VR-12 PCS and MCS scores following the algorithm provided by the Boston University School of Public Health. The mean score, preoperative-to-postoperative change, and proportions of patients who reached the minimum detectable change were compared between SF-36 and VR-12. A total of 1,968 patients (1,559 lumbar and 409 cervical) had completed preoperative and postoperative SF-36 surveys. The values of the SF-36 and VR-12 mean scores were extremely similar, with score differences ranging from 0.77 to 1.82. The preoperative-to-postoperative improvement was highly significant (p<.001) for both SF-36 and VR-12 scores. The mean change scores were similar, with a difference of up to 0.93 for PCS and up to 0.37 for MCS. Minimum detectable change (MDC) values were almost identical for SF-36 and VR-12, with a difference of 0.12 for PCS and up to 0.41 for MCS. The proportions of patients whose change in score reached the MDC were also nearly identical for SF-36 and VR-12. About 90% of the patients above the SF-36 MDC were also above the VR-12 MDC. The converted VR-12 scores, like the SF-36 scores, detect a significant postoperative improvement in PCS and MCS scores. The calculated MDC values and the proportions of patients whose score improvement reaches the MDC are similar for both SF-36 and VR-12. Copyright © 2018 Elsevier Inc. All rights reserved.
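The minimum detectable change used above is conventionally derived from the standard error of measurement; a widely used formulation is MDC95 = 1.96·√2·SEM with SEM = SD·√(1 − r), where r is a test-retest reliability coefficient. The sketch below applies that common formulation with placeholder numbers; the values are not taken from this study.

```python
import math

def minimum_detectable_change(sd, reliability, z=1.96):
    """MDC at ~95% confidence from the common SEM-based formulation:
    SEM = SD * sqrt(1 - r);  MDC = z * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - reliability)
    return z * math.sqrt(2.0) * sem

# Placeholder values (not from the study above): a baseline SD of 10 PCS points and
# a test-retest reliability of 0.9 give an MDC of about 8.8 points.
print(round(minimum_detectable_change(sd=10.0, reliability=0.9), 1))
```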
NASA Astrophysics Data System (ADS)
Salinas Solé, Celia; Peña Angulo, Dhais; Gonzalez Hidalgo, Jose Carlos; Brunetti, Michele
2017-04-01
In this poster we apply the moving window approach (see Poster I of this collection) to trends in spring and monthly (March, April, May) mean values of maximum (Tmax) and minimum (Tmin) temperature over mainland Spain, in order to detect the effects of period length and starting year. The monthly series belong to the Monthly Temperature dataset of Spanish mainland (MOTEDAS), which in its grid format contains 5236 pixels of monthly series (10x10 km). The threshold used in the spatial analyses is 20% of land under a significant trend (p<0.05). The most striking results are as follows: • Seasonal Tmax shows a positive and significant global trend until the mid 80's, with more than 75% of the land affected for windows from 1954-2010 to 1979-2010, after which the signal becomes restricted to the northern region; from 1985-2010 onwards no significant trend is detected. The monthly analyses show differences: the March trend has not been significant (<20% of area) since 1974-2010, while significant trends in April and May persist between 1961-2010/1979-2010 and 1965-2010/1980-2010 respectively, clearly located in the northern midland and the Mediterranean coastland. • The spring Tmin trend is significant (>20%) in all temporal windows, although the NW does not show a significant global trend, and in the most recent windows only the SE is significantly affected. The monthly analyses also differ: no significant trend is detected in March from 1979-2010, or in May from 1985-2010, April being the only month with more than 20% of land affected by a significant trend in every temporal window. • Spatial differences are detected between windows (South-North in March, East-West in April-May). We conclude that the spring Tmax trend varies dramatically with the temporal window and no significance has been detected in recent decades; the northern areas and the Mediterranean coastland appear to be the most affected. The monthly Tmax spatial analyses confirm the heterogeneity of diurnal temperatures, with different spatial gradients detected between months and windows. Seasonal Tmin shows a more consistent temporal pattern, although spatial gradients of significance between months are detected, in some cases opposite to those observed in Tmax.
NASA Astrophysics Data System (ADS)
Guillong, M.; Günther, D.
2001-07-01
A homogenized 193 nm excimer laser with a flat-top beam profile was used to study the capabilities of LA-ICP-MS for 'quasi' non-destructive fingerprinting and sourcing of sapphires from different locations. Sapphires contain 97-99% Al₂O₃ (corundum), with the remainder composed of several trace elements, which can be used to distinguish the origin of these gemstones. The ablation behavior of sapphires, as well as the minimum quantity of sample removal required to determine these trace elements, was investigated. The optimum ablation conditions were a fluence of 6 J cm⁻², a crater diameter of 120 μm, and a laser repetition rate of 10 Hz. The optimum ablation time was determined to be 2 s, equivalent to 20 laser pulses. The mean sample removal was 60 nm per pulse (approx. 3 ng per pulse). This allowed satisfactory trace element determination, and was found to cause the minimum amount of damage while still allowing the fingerprinting of sapphires. More than 40 isotopes were measured using different spatial resolutions (20-120 μm) and eight elements were reproducibly detected in 25 sapphire samples from five different locations. The reproducibility of the trace element distribution is limited by the heterogeneity of the sample; the mean of five or more replicate analyses per sample was used. Calibration was carried out using the NIST 612 glass reference material as an external standard. The linear dynamic range of the ICP-MS (nine orders of magnitude) allowed the use of Al, the major element in sapphire, as an internal standard. The limits of detection for most of the light elements were in the μg g⁻¹ range and were better for heavier elements (mass >85), being in the 0.1 μg g⁻¹ range. The accuracy of the determinations was demonstrated by comparison with XRF analyses of the same set of samples. Using the quantitative analyses obtained by LA-ICP-MS, natural sapphires from five different origins were statistically classified using ternary plots and principal multi-component analysis.
42 CFR 84.205 - Facepiece test; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
...; (ii) Two minutes, calisthenic arm movements; (iii) Two minutes, running in place; and (iv) Two minutes, pumping with a tire pump into a 28-liter (1 cubic-foot) container. (4) Each wearer shall not detect the...
Detection of severe Midwest thunderstorms using geosynchronous satellite data
NASA Technical Reports Server (NTRS)
Adler, R. F.; Markus, M. J.; Fenn, D. D.
1985-01-01
In the present exploration of the effectiveness of severe thunderstorm detection in the Midwestern region of the U.S. by means of approximately 5-min interval geosynchronous satellite data, thunderstorms are defined in IR data as points of relative minimum in brightness temperature T(B) having good time continuity and exhibiting a period of rapid growth. Four parameters (the rate of T(B) decrease in the upper troposphere and in the stratosphere, isotherm expansion, and storm-lifetime minimum T(B)) are shown to be statistically related to the occurrence of severe weather on four case study days and are combined into a Thunderstorm Index which takes values from 1 to 9. Storms rating higher than 6 have a much higher probability of severe weather reports, yielding a warning lead time of 15 min for hail and 30 min for the first tornado report.
Production, fixation, and staining of cells on slides for maximum photometric sensitivity
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Harlow, Patrick M.; Vallarino, Lidia M.
1994-07-01
The need to detect increasingly low levels of antigens or polynucleotides in cells requires improvements in both the preparation and the staining of samples. The combination of centrifugal cytology with the use of glyoxal as a cross-linking fixative produces monolayers of cells having minimum background fluorescence. Detection can be further improved by the use of a recently developed type of luminescent tag containing a lanthanide(III) ion as the light-emitting center. These novel tags are macrocyclic complexes functionalized with an isothiocyanate group to allow covalent coupling to a biosubstrate. The Eu(III) complex possesses a set of properties -- water solubility, inertness to metal release over a wide pH range, ligand-sensitized narrow-band luminescence, large Stokes shift, and long excited-state lifetime -- that provides ease of staining as well as maximum signal with minimum interference from background autofluorescence. Luminescence efficiency studies indicate significant solvent effects.
Thermoluminescence response of flat optical fiber subjected to 9 MeV electron irradiations
NASA Astrophysics Data System (ADS)
Hashim, S.; Omar, S. S. Che; Ibrahim, S. A.; Hassan, W. M. S. Wan; Ung, N. M.; Mahdiraji, G. A.; Bradley, D. A.; Alzimami, K.
2015-01-01
We describe the effort to find a new thermoluminescent (TL) medium using pure silica flat optical fiber (FF). The present study investigates the dose response, sensitivity, minimum detectable dose and glow curve of FF subjected to 9 MeV electron irradiation over doses ranging from 0 Gy to 2.5 Gy. These TL properties of the FF are compared with those of commercially available TLD-100 rods. The TL measurements exhibit a linear dose response over the doses delivered with a linear accelerator. We found that the sensitivity of TLD-100 is markedly (about 6 times) greater than that of the FF optical fiber. The minimum detectable dose was found to be 0.09 mGy for TLD-100 and 8.22 mGy for FF. Our work may contribute towards the development of a new dosimeter for personal monitoring purposes.
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Buss, James R.; Kopriva, Ivica
2004-04-01
We propose a physics approach to solve a physical inverse problem, namely to choose the unique equilibrium solution at the minimum free energy, H = E − T₀S, which includes the Wiener (least-mean-squares, minimum E) and ICA (maximum S) solutions as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing of a single pixel in the real-world cases of remote sensing, early tumor detection and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, or selected, by means of the absolute minimum of the isothermal free energy, as the ground truth of the local equilibrium condition at the single-pixel footprint.
NASA Astrophysics Data System (ADS)
McClain, Bobbi J.; Porter, William F.
2000-11-01
Satellite imagery is a useful tool for large-scale habitat analysis; however, its limitations need to be tested. We tested these limitations by varying the methods of a habitat evaluation for white-tailed deer (Odocoileus virginianus) in the Adirondack Park, New York, USA, using harvest data to create and validate the assessment models. We used two classified images, one with a large minimum mapping unit but high accuracy and one with no minimum mapping unit but slightly lower accuracy, to test the sensitivity of the evaluation to these differences. We tested the utility of two methods of assessment: habitat suitability index modeling and pattern recognition modeling. We varied the scale at which the models were applied by using five separate sizes of analysis windows. Results showed that the presence of a large minimum mapping unit eliminates important details of the habitat. Window size is relatively unimportant if the data are averaged to a large resolution (i.e., township), but if the data are used at the smaller resolution, then window size is an important consideration. In the Adirondacks, the proportion of hardwood and softwood in an area is most important to the spatial dynamics of deer populations. The low occurrence of open area in all parts of the park either limits the effect of this cover type on the population or limits our ability to detect the effect. The arrangement and interspersion of cover types were not significant to deer populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building a TG-71 based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs, which were then converted to air-gap factors for SSD 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The data set measured for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wenfang; Du, Jinjin; Wen, Ruijuan
We have investigated the transmission spectra of a Fabry-Perot interferometer (FPI) with squeezed vacuum state injection and non-Gaussian detection, including photon-number-resolving detection and parity detection. In order to show the suitability of the system, parallel studies were made of the performance of two other light sources, a coherent state and a Fock state, either with classical mean-intensity detection or with non-Gaussian detection. This shows that by using the squeezed vacuum state and non-Gaussian detection simultaneously, the resolution of the FPI can go far beyond the standard cavity bandwidth limit of current techniques. The sensitivity of the scheme has also been explored, and it is shown that the minimum detectable sensitivity is better than that of the other schemes.
Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja
2016-11-15
We present a biosensing platform for the detection of proteins based on agglutination of aptamer-coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates, whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25 pM with the same sample-to-answer time (15 min 30 s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10 µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments as well as integration of the assay on a low-cost disc are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.
Rosotti, Giovanni P; Juhasz, Attila; Booth, Richard A; Clarke, Cathie J
2016-07-01
We investigate the minimum planet mass that produces observable signatures in infrared scattered light and submillimetre (submm) continuum images and demonstrate how these images can be used to measure planet masses to within a factor of about 2. To this end, we perform multi-fluid gas and dust simulations of discs containing low-mass planets, generating simulated observations at 1.65, 10 and 850 μm. We show that the minimum planet mass that produces a detectable signature is ∼15 M⊕: this value is strongly dependent on disc temperature and changes slightly with wavelength (favouring the submm). We also confirm previous results that there is a minimum planet mass of ∼20 M⊕ that produces a pressure maximum in the disc: only planets above this threshold mass generate a dust trap that can eventually create a hole in the submm dust. Below this mass, planets produce annular enhancements in dust outwards of the planet and a reduction in the vicinity of the planet. These features are in steady state and can be understood in terms of variations in the dust radial velocity, imposed by the perturbed gas pressure radial profile, analogous to a traffic jam. We also show how planet masses can be derived from structure in scattered light and submm images. We emphasize that simulations with dust need to be run over thousands of planetary orbits so as to allow the gas profile to achieve a steady state and caution against the estimation of planet masses using gas-only simulations.
Rodríguez, Rogelio; Borràs, Antoni; Leal, Luz; Cerdà, Víctor; Ferrer, Laura
2016-03-10
An automatic system based on multisyringe flow injection analysis (MSFIA) and lab-on-valve (LOV) flow techniques for separation and pre-concentration of ²²⁶Ra from drinking and natural water samples has been developed. The analytical protocol combines two different procedures, Ra adsorption on MnO₂ and BaSO₄ co-precipitation, achieving greater selectivity, especially in water samples with low radium levels. Radium is adsorbed on MnO₂ deposited on macroporous bead cellulose. It is then eluted with hydroxylamine to transform insoluble MnO₂ into soluble Mn(II), thus freeing Ra, which is then coprecipitated with BaSO₄. The ²²⁶Ra can be detected directly in off-line mode using a low-background proportional counter (LBPC) or through a liquid scintillation counter (LSC), after an on-line dissolution of the coprecipitate. Thus, the versatility of the proposed system allows selection of the radiometric detection technique depending on detector availability or the required response efficiency (sample number vs. response time and limit of detection). The MSFIA-LOV system improves the precision (1.7% RSD) and the extraction frequency (up to 3 h⁻¹). Besides, it has been satisfactorily applied to different types of water matrices (tap, mineral, well and sea water). The ²²⁶Ra minimum detectable activities (LSC: 0.004 Bq L⁻¹; LBPC: 0.02 Bq L⁻¹) attained by this system meet the guidance values proposed by the relevant international agencies, e.g. WHO, EPA and EC. Copyright © 2016 Elsevier B.V. All rights reserved.
The impact of the minimum wage on health.
Andreyeva, Elena; Ukert, Benjamin
2018-03-07
This study evaluates the effect of minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System, and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and decrease in fruit and vegetable intake are driven by the older population, married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married.
Secondary electric power generation with minimum engine bleed
NASA Technical Reports Server (NTRS)
Tagge, G. E.
1983-01-01
Secondary electric power generation with minimum engine bleed is discussed. Present and future jet engine systems are compared. The role of auxiliary power units is evaluated. Details of secondary electric power generation systems with and without auxiliary power units are given. Advanced bleed systems are compared with minimum bleed systems. A cost model of ownership is given. The difference in the cost of ownership between a minimum bleed system and an advanced bleed system is given.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2015-02-01
In recent years video traffic has become the dominant application on the Internet, with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Some of the use cases for these applications include areas such as public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Towards optimal performance, the design or optimisation of each of these applications should be context-aware and task-oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth, etc.) chosen to match the use-case requirements. For example, in the security domain, a task-oriented consideration may be that higher-resolution video would be required to identify an intruder than to simply detect his presence, whilst in the same case contextual factors, such as the requirement to transmit over a resource-limited wireless link, may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. First, we define two principal classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate, etc.). For example, in the detection class, profiles for intruder detection will require different temporal characteristics (frame rate) from those used for the detection of high-motion objects such as vehicles or aircraft. We also define a set of contextual attributes that are associated with each instance of a running application, including resource constraints imposed by the transmission system employed and the hardware platforms used as source and destination of the video stream. Empirical results are presented and analysed to demonstrate the advantages of the proposed schemes.
Foreign body detection in food materials using compton scattered x-rays
NASA Astrophysics Data System (ADS)
McFarlane, Nigel James Bruce
This thesis investigated the application of X-ray Compton scattering to the problem of foreign body detection in food. The methods used were analytical modelling, simulation and experiment. A criterion was defined for detectability, and a model was developed for predicting the minimum time required for detection. The model was used to predict the smallest detectable cubes of air, glass, plastic and steel. Simulations and experiments were performed on voids and glass in polystyrene phantoms, water, coffee and muesli. Backscatter was used to detect bones in chicken meat. The effects of geometry and multiple scatter on contrast, signal-to-noise ratio, and detection time were simulated. Compton scatter was compared with transmission, and the effect of inhomogeneity was modelled. Spectral shape was investigated as a means of foreign body detection. A signal-to-noise ratio of 7.4 was required for foreign body detection in food. A 0.46 cm cube of glass or a 1.19 cm cube of polystyrene was detectable in a 10 cm cube of water in one second. The minimum time to scan a whole sample varied as the 7th power of the foreign body size and the 5th power of the sample size. Compton scatter inspection produced higher contrasts than transmission, but required longer measurement times because of the low number of photon counts. Compton scatter inspection of whole samples was very slow compared to production-line speeds in the food industry. There was potential for Compton scatter in applications which did not require whole-sample scanning, such as surface inspection. There was also potential in the inspection of inhomogeneous samples. The multiple scatter fraction varied from 25% to 55% for 2 to 10 cm cubes of water, but did not have a large effect on the detection time. The spectral shape gave good contrasts and signal-to-noise ratios in the detection of chicken bones.
Not Available
1981-01-29
Temperature profiles at elevated temperature conditions are monitored by use of an elongated device having two conductors spaced by the minimum distance required to normally maintain an open circuit between them. The melting point of one conductor is selected at the elevated temperature being detected, while the melting point of the other is higher. As the preselected temperature is reached, liquid metal will flow between the conductors creating short circuits which are detectable as to location.
Tokarz, Richard D.
1983-01-01
Temperature profiles at elevated temperature conditions are monitored by use of an elongated device having two conductors spaced by the minimum distance required to normally maintain an open circuit between them. The melting point of one conductor is selected at the elevated temperature being detected, while the melting point of the other is higher. As the preselected temperature is reached, liquid metal will flow between the conductors, creating short circuits which are detectable as to location.
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
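The recompress-and-compare procedure described above can be prototyped directly: save the test image at a sweep of JPEG quality factors, measure how much each recompression changes a block-level statistic, and look for a local minimum in the resulting p-curve. In the sketch below a simple per-block pixel-difference statistic stands in for the tetrolet covering index, and the file name is a placeholder, so it illustrates the pipeline rather than the paper's exact feature.

```python
import io
import numpy as np
from PIL import Image

def block_change_fraction(original, recompressed, block=4):
    """Fraction of 4x4 blocks whose pixels changed after recompression
    (a stand-in statistic; the paper uses tetrolet covering indexes)."""
    a = np.asarray(original.convert("L"), dtype=np.int16)
    b = np.asarray(recompressed.convert("L"), dtype=np.int16)
    h, w = (a.shape[0] // block) * block, (a.shape[1] // block) * block
    diff = np.abs(a[:h, :w] - b[:h, :w]).reshape(h // block, block, w // block, block)
    return (diff.sum(axis=(1, 3)) > 0).mean()

img = Image.open("test_uncompressed.png")     # image stored in an uncompressed format
p_curve = []
for q in range(50, 101):                      # sweep of JPEG quality factors
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    buf.seek(0)
    p_curve.append((q, block_change_fraction(img, Image.open(buf))))

# A pronounced local minimum at some quality factor suggests the image was
# previously JPEG-compressed at (approximately) that quality.
print(min(p_curve, key=lambda item: item[1]))
```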
Chappell, A; Li, Y; Yu, H Q; Zhang, Y Z; Li, X Y
2015-06-01
The caesium-137 (¹³⁷Cs) technique for estimating net, time-integrated soil redistribution by the processes of wind, water and tillage is increasingly being used with repeated sampling to form a baseline to evaluate change over small (years to decades) timeframes. This interest stems from knowledge that since the 1950s soil redistribution has responded dynamically to different phases of land use change and management. Currently, there is no standard approach to detect change in ¹³⁷Cs-derived net soil redistribution and thereby identify the driving forces responsible for change. We outline recent advances in space-time sampling in the soil monitoring literature which provide a rigorous statistical and pragmatic approach to estimating the change over time in the spatial mean of environmental properties. We apply the space-time sampling framework, estimate the minimum detectable change of net soil redistribution and consider the information content and cost implications of different sampling designs for a study area in the Chinese Loess Plateau. Three phases (1954-1996, 1954-2012 and 1996-2012) of net soil erosion were detectable and attributed to well-documented historical change in land use and management practices in the study area and across the region. We recommend that the design for space-time sampling is considered carefully alongside cost-effective use of the spatial mean to detect and correctly attribute cause of change over time, particularly across spatial scales of variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
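For repeated spatial sampling of this kind, the smallest change in the spatial mean detectable at a given confidence can be approximated from the sampling standard errors of the two campaigns. The sketch below uses the common two-sample form Δmin = z·√(s₁²/n₁ + s₂²/n₂) as an illustration; it is not necessarily the estimator used in the paper, and the numbers are placeholders.

```python
import math

def minimum_detectable_change(s1, n1, s2, n2, z=1.96):
    """Smallest difference between two campaign means detectable at ~95% confidence,
    assuming independent simple random samples (illustrative form only)."""
    return z * math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Placeholder numbers: standard deviations of 137Cs-derived net soil redistribution
# (t ha^-1 yr^-1) and sample sizes for the two sampling campaigns.
print(round(minimum_detectable_change(s1=8.0, n1=40, s2=7.0, n2=40), 2))
```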
Liu, Anne; Fong, Amie; Becket, Elinne; Yuan, Jessica; Tamae, Cindy; Medrano, Leah; Maiz, Maria; Wahba, Christine; Lee, Catherine; Lee, Kim; Tran, Katherine P; Yang, Hanjing; Hoffman, Robert M; Salih, Anya; Miller, Jeffrey H
2011-03-01
Many studies have examined the evolution of bacterial mutants that are resistant to specific antibiotics, and many of these focus on concentrations at and above the MIC. Here we ask for the minimum concentration at which existing resistant mutants can outgrow sensitive wild-type strains in competition experiments at antibiotic levels significantly below the MIC, and we define a minimum selective concentration (MSC) in Escherichia coli for two antibiotics, which is near 1/5 of the MIC for ciprofloxacin and 1/20 of the MIC for tetracycline. Because of the prevalence of resistant mutants already in the human microbiome, allowable levels of antibiotics to which we are exposed should be below the MSC. Since this concentration often corresponds to low or trace levels of antibiotics, it is helpful to have simple tests to detect such trace levels. We describe a simple ultrasensitive test for detecting the presence of antibiotics and genotoxic agents. The test is based on the use of chromogenic proteins as color markers and the use of single and multiple mutants of Escherichia coli that have greatly increased sensitivity to either a wide range of antibiotics or specific antibiotics, antibiotic families, and genotoxic agents. This test can detect ciprofloxacin at 1/75 of the MIC.
Time and frequency constrained sonar signal design for optimal detection of elastic objects.
Hamschin, Brandon; Loughlin, Patrick J
2013-04-01
In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real world.
Farley, Carlton; Kassu, Aschalew; Bose, Nayana; Jackson-Davis, Armitra; Boateng, Judith; Ruffin, Paul; Sharma, Anup
2017-06-01
A short distance standoff Raman technique is demonstrated for detecting economically motivated adulteration (EMA) in extra virgin olive oil (EVOO). Using a portable Raman spectrometer operating with a 785 nm laser and a 2-in. refracting telescope, adulteration of olive oil with grapeseed oil and canola oil is detected between 1% and 100% at a minimum concentration of 2.5% from a distance of 15 cm and at a minimum concentration of 5% from a distance of 1 m. The technique involves correlating the intensity ratios of prominent Raman bands of edible oils at 1254, 1657, and 1441 cm⁻¹ to the degree of adulteration. As a novel variation in the data analysis technique, integrated intensities over a spectral range of 100 cm⁻¹ around the Raman line were used, making it possible to increase the sensitivity of the technique. The technique is demonstrated by detecting adulteration of EVOO with grapeseed and canola oils at 0-100%. Due to the potential of this technique for making measurements from a convenient distance, the short distance standoff Raman technique has the promise to be used for routine applications in the food industry such as identifying food items and monitoring EMA at various checkpoints in the food supply chain and storage facilities.
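As a rough illustration of the data analysis step, the sketch below integrates intensity over a 100 cm⁻¹ window centred on each of the bands named above and forms band ratios; the spectrum is synthetic and the calibration against percent adulteration is left out.

```python
import numpy as np

def band_area(wavenumber, intensity, center, half_width=50.0):
    """Integrated intensity over +/- half_width cm^-1 around a band center."""
    mask = np.abs(wavenumber - center) <= half_width
    return np.trapz(intensity[mask], wavenumber[mask])

# Synthetic spectrum standing in for a measured olive-oil Raman spectrum.
wn = np.linspace(800, 1800, 2000)
spectrum = (np.exp(-((wn - 1254) / 15.0) ** 2)
            + 1.5 * np.exp(-((wn - 1441) / 18.0) ** 2)
            + 0.8 * np.exp(-((wn - 1657) / 15.0) ** 2))

ratio_1254 = band_area(wn, spectrum, 1254) / band_area(wn, spectrum, 1441)
ratio_1657 = band_area(wn, spectrum, 1657) / band_area(wn, spectrum, 1441)
print(ratio_1254, ratio_1657)  # these ratios would be calibrated against % adulteration
```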
Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia
2016-04-01
Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, [Formula: see text], was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using [Formula: see text] from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
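A heavily simplified stand-in for this experiment is sketched below: a Hotelling template is estimated from an increasing number of repeated "scans" and the resulting area under the ROC curve is compared with the value from all scans. The channelization step is omitted and the channel outputs are synthetic, so only the subset-size logic is illustrated.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_channels, n_scans = 10, 100
signal = rng.normal(0.3, 1.0, (n_scans, n_channels))   # signal-present channel outputs
noise = rng.normal(0.0, 1.0, (n_scans, n_channels))    # signal-absent channel outputs

def auc_from_subset(k):
    """Estimate the template from the first k scans, then score all scans."""
    s, n = signal[:k], noise[:k]
    cov = 0.5 * (np.cov(s, rowvar=False) + np.cov(n, rowvar=False))
    template = np.linalg.pinv(cov) @ (s.mean(0) - n.mean(0))   # Hotelling template
    t_s, t_n = signal @ template, noise @ template             # decision variables
    u = mannwhitneyu(t_s, t_n, alternative="greater").statistic
    return u / (len(t_s) * len(t_n))                           # AUC (Az) estimate

reference = auc_from_subset(n_scans)
for k in (10, 25, 50, 100):
    print(k, round(auc_from_subset(k), 3), "reference", round(reference, 3))
```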
NASA Astrophysics Data System (ADS)
Silverman, N. L.; Maneta, M. P.
2016-06-01
Detecting long-term change in seasonal precipitation using ground observations is dependent on the representativity of the point measurement to the surrounding landscape. In mountainous regions, representativity can be poor and lead to large uncertainties in precipitation estimates at high elevations or in areas where observations are sparse. If the uncertainty in the estimate is large compared to the long-term shifts in precipitation, then the change will likely go undetected. In this analysis, we examine the minimum detectable change across mountainous terrain in western Montana, USA. We ask the question: What is the minimum amount of change that is necessary to be detected using our best estimates of precipitation in complex terrain? We evaluate the spatial uncertainty in the precipitation estimates by conditioning historic regional climate model simulations to ground observations using Bayesian inference. By using this uncertainty as a null hypothesis, we test for detectability across the study region. To provide context for the detectability calculations, we look at a range of future scenarios from the Coupled Model Intercomparison Project 5 (CMIP5) multimodel ensemble downscaled to 4 km resolution using the MACAv2-METDATA data set. When using the ensemble averages we find that approximately 65% of the significant increases in winter precipitation go undetected at midelevations. At high elevation, approximately 75% of significant increases in winter precipitation are undetectable. Areas where change can be detected are largely controlled by topographic features. Elevation and aspect are key characteristics that determine whether or not changes in winter precipitation can be detected. Furthermore, we find that undetected increases in winter precipitation at high elevation will likely remain as snow under climate change scenarios. Therefore, there is potential for these areas to offset snowpack loss at lower elevations and confound the effects of climate change on water resources.
Spinal focal lesion detection in multiple myeloma using multimodal image features
NASA Astrophysics Data System (ADS)
Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf
2015-03-01
Multiple myeloma is a tumor disease in the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur in different sites in the skeleton. To quantify overall tumor mass for determining degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large amount of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume is not possible until now. Therefore development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1 weighted MR images and hyperintensity in T2 weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1 and T2 weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity value derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around a voxel from both T1 and T2 weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients can be classified at an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
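The voxel classification step can be sketched as below, assuming scikit-learn and SciPy; the T1/T2 "images" and lesion mask are synthetic, and the features are the plain intensity plus the 5 × 5 neighborhood maximum, minimum, mean, and median from both modalities, as described above.

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter, minimum_filter, uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(t1, t2, size=5):
    """Stack per-voxel features from both modalities into an (n_voxels, 10) matrix."""
    feats = []
    for img in (t1, t2):
        feats += [img,
                  maximum_filter(img, size),
                  minimum_filter(img, size),
                  uniform_filter(img, size),   # local mean
                  median_filter(img, size)]
    return np.stack([f.ravel() for f in feats], axis=1)

rng = np.random.default_rng(1)
t1 = rng.normal(size=(64, 64))                 # stand-in T1 slice (bone voxels only)
t2 = rng.normal(size=(64, 64))                 # stand-in T2 slice
lesion_mask = (rng.random((64, 64)) < 0.05).astype(int)

X, y = voxel_features(t1, t2), lesion_mask.ravel()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # in practice, accuracy is assessed leave-one-patient-out
```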
Doxon, Andrew J; Johnson, David E; Tan, Hong Z; Provancher, William R
2013-01-01
Many of the devices used in haptics research are over-engineered for the task and are designed with capabilities that go far beyond human perception levels. Designing devices that more closely match the limits of human perception will make them smaller, less expensive, and more useful. However, many device-centric perception thresholds have yet to be evaluated. To this end, three experiments were conducted, using one degree-of-freedom contact location feedback device in combination with a kinesthetic display, to provide a more explicit set of specifications for similar tactile-kinesthetic haptic devices. The first of these experiments evaluated the ability of humans to repeatedly localize tactile cues across the fingerpad. Subjects could localize cues to within 1.3 mm and showed bias toward the center of the fingerpad. The second experiment evaluated the minimum perceptible difference of backlash at the tactile element. Subjects were able to discriminate device backlash in excess of 0.46 mm on low-curvature models and 0.93 mm on high-curvature models. The last experiment evaluated the minimum perceptible difference of system delay between user action and device reaction. Subjects were able to discriminate delays in excess of 61 ms. The results from these studies can serve as the maximum (i.e., most demanding) device specifications for most tactile-kinesthetic haptic systems.
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls was analyzed. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. Results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
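A minimal sketch of the multiscale variance calculation, assuming the PyWavelets package; the MTC series here is synthetic noise standing in for per-stride minimum toe clearance values, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

mtc = np.random.default_rng(2).normal(2.0, 0.3, 2048)    # fake MTC series (cm)

# Decompose into detail signals at scales 1..8 and compute their variances.
coeffs = pywt.wavedec(mtc, "db4", level=8)                # [approx, d8, d7, ..., d1]
detail_var = {lvl: np.var(c) for lvl, c in zip(range(8, 0, -1), coeffs[1:])}

scales = np.array(sorted(detail_var))
log_var = np.log2([detail_var[s] for s in scales])
beta = np.polyfit(scales, log_var, 1)[0]                  # slope of the variance progression
print(detail_var[5], beta)
```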
EE Cep Winks in Full Color (Abstract)
NASA Astrophysics Data System (ADS)
Walker, G.
2015-06-01
(Abstract only) We observed the long-period (5.6 years) eclipsing binary variable star EE Cep during its 2014 eclipse. It was observed on every clear night from the Maria Mitchell Observatory as well as from remote sites, for a total of 25 nights. Each night consisted of a detailed time series in BVRI looking for short-term variations, for a total of >10,000 observations. The data were transformed to the Standard System. In addition, a time series was captured during the night of the eclipse. These data provide an alternative to the traditional method of determining the Time of Minimum (TOM). The TOM varied with color. Several strong correlations are seen between colors, substantiating the detection of variations on a time scale of hours. The long-term light curve shows five interesting and distinct phases with different characteristics.
Ma, Yufei; Yu, Guang; Zhang, Jingbo; Yu, Xin; Sun, Rui; Tittel, Frank K
2015-03-27
A sensitive trace gas sensor platform based on quartz-enhanced photoacoustic spectroscopy (QEPAS) is reported. A 1.395 μm continuous wave (CW), distributed feedback pigtailed diode laser was used as the excitation source and H2O was selected as the target analyte. Two kinds of quartz tuning forks (QTFs), with resonant frequencies (f0) of 30.72 kHz and 38 kHz, were each employed for the first time as the acoustic wave transducer for QEPAS, in place of a standard QTF with an f0 of 32.768 kHz. The QEPAS sensor performance using the three different QTFs was experimentally investigated and theoretically analyzed. Minimum detection limits of 5.9 ppmv and 4.3 ppmv were achieved for f0 values of 32.768 kHz and 30.72 kHz, respectively.
Chen, Xiao; Xu, Rong-Qing; Chen, Jian-Ping; Shen, Zhong-Hua; Jian, Lu; Ni, Xiao-Wu
2004-06-01
A highly sensitive fiber-optic sensor based on optical beam deflection is applied for investigating the propagation of a laser-induced plasma shock wave, the oscillation of a cavitation bubble diameter, and the development of a bubble-collapse-induced shock wave when a Nd:YAG laser pulse is focused upon an aluminum surface in water. By the sequence of experimental waveforms detected at different distances, the attenuation properties of the plasma shock wave and of the bubble-collapse-induced shock wave are obtained. Besides, based on characteristic signals, both the maximum and the minimum bubble radii at each oscillation cycle are determined, as are the corresponding oscillating periods.
NASA Technical Reports Server (NTRS)
Kosterev, A. A.; Tittel, F. K.; Durante, W.; Allen, M.; Kohler, R.; Gmachl, C.; Capasso, F.; Sivco, D. L.; Cho, A. Y.
2002-01-01
We report the first application of pulsed, near-room-temperature quantum cascade laser technology to the continuous detection of biogenic CO production rates above viable cultures of vascular smooth muscle cells. A computer-controlled sequence of measurements over a 9-h period was obtained, resulting in a minimum detectable CO production of 20 ppb in a 1-m optical path above a standard cell-culture flask. Data-processing procedures for real-time monitoring of both biogenic and ambient atmospheric CO concentrations are described.
13CO Survey of Northern Intermediate-Mass Star-Forming Regions
NASA Astrophysics Data System (ADS)
Lundquist, Michael J.; Kobulnicky, H. A.; Kerton, C. R.
2014-01-01
We conducted a survey of 13CO with the OSO 20-m telescope toward 68 intermediate-mass star-forming regions (IM SFRs) visible in the northern hemisphere. These regions have mostly been excluded from previous CO surveys and were selected from IRAS colors that specify cool dust and large PAH contribution. These regions are known to host stars up to, but not exceeding, about 8 solar masses. We detect 13CO in 57 of the 68 IM SFRs down to a typical RMS of ~50 mK. We present kinematic distances, minimum column densities, and minimum masses for these IM SFRs.
Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink
2013-03-21
number of transmitters). $C = \log_2 \left| I_{N_r} + \frac{E_s}{N_t N_0} H H^H \right|$ (2.32) In the signal-to-noise ratio, $E_s$ represents the total energy from all transmitters... The channel matrix pseudo-inverse is computed by (2.36) [6, p. 970]: $H^{+} = (H^H H)^{-1} H^H$ (2.36) 2.6.5 Minimum Mean-Squared Error Detection. Minimum Mean Squared... $H^{\dagger} = \left( H^H H + \frac{N_t}{\mathrm{SNR}} I \right)^{-1} H^H$ (3.14) Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received
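Read together, Eq. (2.36) is the channel pseudo-inverse (the zero-forcing equalizer) and Eq. (3.14) is the MMSE equalizer. A small numerical sketch of both, for a random channel and purely illustrative values, is:

```python
import numpy as np

rng = np.random.default_rng(3)
Nr, Nt, snr = 4, 4, 10.0
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)

H_zf = np.linalg.inv(H.conj().T @ H) @ H.conj().T                     # (H^H H)^-1 H^H
H_mmse = np.linalg.inv(H.conj().T @ H + (Nt / snr) * np.eye(Nt)) @ H.conj().T

symbols = rng.choice([-1.0, 1.0], Nt) + 1j * rng.choice([-1.0, 1.0], Nt)  # QPSK-like
received = H @ symbols + 0.1 * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))
print(np.round(H_zf @ received, 2))    # zero-forcing symbol estimates
print(np.round(H_mmse @ received, 2))  # MMSE symbol estimates
```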
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor’s 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.
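One simplified reading of the MEDM is sketched below, under the assumption of a binary symbolization of returns: compute the Shannon block entropy per symbol at each block length ("scale") and take the scale where this entropy density is smallest. This is an illustrative interpretation, not the authors' code, and the series is synthetic.

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(size=5000)                 # stand-in for S&P 500 index returns
symbols = (returns > 0).astype(int)             # binary symbolization

def entropy_density(sym, n):
    """Shannon block entropy (bits per symbol) for blocks of length n."""
    blocks = ["".join(map(str, sym[i:i + n])) for i in range(len(sym) - n + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / n)

densities = {n: entropy_density(symbols, n) for n in range(1, 11)}
structure_scale = min(densities, key=densities.get)
print(densities[structure_scale], structure_scale)
```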
NASA Technical Reports Server (NTRS)
Abe, K.; Fuke, H.; Haino, S.; Hams, T.; Hasegawa, M.; Horikoshi, A.; Kim, K. C.; Kusumoto, A.; Lee, M. H.; Makida, Y.;
2012-01-01
The energy spectrum of cosmic-ray antiprotons (p-bars) from 0.17 to 3.5 GeV has been measured using 7886 p-bars detected by BESS-Polar II during a long-duration flight over Antarctica near solar minimum in December 2007 and January 2008. This shows good consistency with secondary p-bar calculations. Cosmologically primary p-bars have been investigated by comparing measured and calculated p-bar spectra. The BESS-Polar II data show no evidence of primary p-bars from the evaporation of primordial black holes.
Entanglement witnesses in spin models
NASA Astrophysics Data System (ADS)
Tóth, Géza
2005-01-01
We construct entanglement witnesses using fundamental quantum operators of spin models which contain two-particle interactions and have a certain symmetry. By choosing the Hamiltonian as such an operator, our method can be used for detecting entanglement by energy measurement. We apply this method to the Heisenberg model in a cubic lattice with a magnetic field, the XY model, and other familiar spin systems. Our method provides a temperature bound for separable states for systems in thermal equilibrium. We also study the Bose-Hubbard model and relate its energy minimum for separable states to the minimum obtained from the Gutzwiller ansatz.
Morgaz, Juan; Granados, María del Mar; Domínguez, Juan Manuel; Navarrete, Rocío; Fernández, Andrés; Galán, Alba; Muñoz, Pilar; Gómez-Villamandos, Rafael J
2011-06-01
The use of spectral entropy to determine anaesthetic depth and antinociception was evaluated in sevoflurane-anaesthetised Beagle dogs. Dogs were anaesthetised at each of five multiples of their individual minimum alveolar concentrations (MAC; 0.75, 1, 1.25, 1.5 and 1.75 MAC), and response entropy (RE), state entropy (SE), RE-SE difference, burst suppression rate (BSR) and cardiorespiratory parameters were recorded before and after a painful stimulus. RE, SE and RE-SE difference did not change significantly after the stimuli. The correlation between MAC-entropy parameters was weak, but these values increased when 1.75 MAC results were excluded from the analysis. BSR was different to zero at 1.5 and 1.75 MAC. It was concluded that RE and RE-SE differences were not adequate indicators of antinociception and SE and RE were unable to detect deep planes of anaesthesia in dogs, although they both distinguished the awake and unconscious states. Copyright © 2010 Elsevier Ltd. All rights reserved.
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed that: 1) the relationship between magnitude estimation and the estimation of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; 2) the orderings of prestige of the professions obtained from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (estimation of magnitudes of minimum salaries).
Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces
NASA Astrophysics Data System (ADS)
Abu-Alqumsan, Mohammad; Peer, Angelika
2016-06-01
Objective. Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
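The CCA baseline that the discussion above builds on can be sketched as follows, assuming scikit-learn; the EEG data, channel count, and stimulus frequencies are synthetic and illustrative, and CVARS itself is not implemented here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 250
t = np.arange(0, 2, 1 / fs)                                   # 2 s epoch
rng = np.random.default_rng(5)
eeg = rng.normal(size=(len(t), 8)) + 0.5 * np.sin(2 * np.pi * 12 * t)[:, None]

def reference(freq, harmonics=2):
    """Sine/cosine reference matrix at the stimulus frequency and its harmonics."""
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def canonical_correlation(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return float(np.corrcoef(u.ravel(), v.ravel())[0, 1])

scores = {f: canonical_correlation(eeg, reference(f)) for f in (8.0, 10.0, 12.0, 15.0)}
print(scores, max(scores, key=scores.get))                    # should favour 12 Hz here
```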
Fate of thiamethoxam in mesocosms and response of the zooplankton community.
Lobson, C; Luong, K; Seburn, D; White, M; Hann, B; Prosser, R S; Wong, C S; Hanson, M L
2018-05-14
Thiamethoxam is a neonicotinoid insecticide that can reach wetlands in agro-ecosystems through runoff. The fate and effects of thiamethoxam on non-target organisms in shallow wetland ecosystems have not been well characterized. To this end, a mesocosm study was conducted with a focus on characterizing zooplankton community responses. A single pulse application of thiamethoxam (0, 25, 50, 100, 250, and 500 μg/L; n = 3) was applied to experimental systems and monitored for 8 weeks. The mean half-life of thiamethoxam among the different treatments was 3.7 days in the water column, with concentrations of <0.8 μg/L in the majority of mesocosms by 56 days. Principal response curve analysis did not show any significant concentration-dependent differences in the zooplankton community among treatments over the course of the study. The minimum detectable difference (MDD%) values for the abundance of potentially sensitive arthropod taxa (nauplius larvae, cyclopoid copepods) allowed detection of differences from controls as small as 42% and 59% effect, respectively. The MDD% values for the total abundance of zooplankton (including the potentially less sensitive taxonomic group Rotifera) allowed detection of differences from controls as small as a 41% effect. There were no statistically significant differences in zooplankton abundance or diversity between control and treated mesocosms at the end of the study. There were also no statistically significant differences for individual taxa that were sustained between sampling points or manifested as a concentration-response. We conclude that acute exposure to thiamethoxam at environmentally relevant concentrations (typically ng/L) likely does not represent a significant adverse ecological risk to wetland zooplankton community abundance and structure. Copyright © 2018 Elsevier B.V. All rights reserved.
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values to distributions of aquatic invertebrate communities, and ramifications of natural variability to the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but was generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reach(es) may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
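The dependence of the minimum detectable difference on variability, significance level, and power can be sketched with the usual two-sample approximation below; the coefficient of variation and sample sizes are illustrative, not values from the study.

```python
import numpy as np
from scipy import stats

def mdd_percent(cv, n, alpha=0.05, power=0.80):
    """Approximate two-sample MDD, expressed as a percentage of the mean."""
    df = 2 * (n - 1)
    t_alpha = stats.t.ppf(1 - alpha / 2, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * cv * np.sqrt(2.0 / n) * 100.0

for n in (3, 9, 30, 100):
    print(n, round(mdd_percent(cv=0.40, n=n), 1), "%")  # MDD shrinks slowly as reaches are added
```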
Trends in annual minimum exposed snow and ice cover in High Mountain Asia from MODIS
NASA Astrophysics Data System (ADS)
Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Racoviteanu, Adina; Armstrong, Richard; Dozier, Jeff
2016-04-01
Though a relatively short record on climatological scales, data from the Moderate Resolution Imaging Spectroradiometer (MODIS) from 2000-2014 can be used to evaluate changes in the cryosphere and provide a robust baseline for future observations from space. We use the MODIS Snow Covered Area and Grain size (MODSCAG) algorithm, based on spectral mixture analysis, to estimate daily fractional snow and ice cover and the MODICE Persistent Ice (MODICE) algorithm to estimate the annual minimum snow and ice fraction (fSCA) for each year from 2000 to 2014 in High Mountain Asia. We have found that MODSCAG performs better than other algorithms, such as the Normalized Difference Index (NDSI), at detecting snow. We use MODICE because it minimizes false positives (compared to maximum extents), for example, when bright soils or clouds are incorrectly classified as snow, a common problem with optical satellite snow mapping. We analyze changes in area using the annual MODICE maps of minimum snow and ice cover for over 15,000 individual glaciers as defined by the Randolph Glacier Inventory (RGI) Version 5, focusing on the Amu Darya, Syr Darya, Upper Indus, Ganges, and Brahmaputra River basins. For each glacier with an area of at least 1 km2 as defined by RGI, we sum the total minimum snow and ice covered area for each year from 2000 to 2014 and estimate the trends in area loss or gain. We find the largest loss in annual minimum snow and ice extent for 2000-2014 in the Brahmaputra and Ganges with 57% and 40%, respectively, of analyzed glaciers with significant losses (p-value<0.05). In the Upper Indus River basin, we see both gains and losses in minimum snow and ice extent, but more glaciers with losses than gains. Our analysis shows that a smaller proportion of glaciers in the Amu Darya and Syr Darya are experiencing significant changes in minimum snow and ice extent (3.5% and 12.2%), possibly because more of the glaciers in this region are smaller than 1 km2 than in the Indus, Ganges, and Brahmaputra making analysis from MODIS (pixel area ~0.25 km2) difficult. Overall, we see 23% of the glaciers in the 5 river basins with significant trends (in either direction). We relate these changes in area to topography and climate to understand the driving processes related to these changes. In addition to annual minimum snow and ice cover, the MODICE algorithm also provides the date of minimum fSCA for each pixel. To determine whether the surface was snow or ice we use the date of minimum fSCA from MODICE to index daily maps of snow on ice (SOI), or exposed glacier ice (EGI) and systematically derive an equilibrium line altitude (ELA) for each year from 2000-2014. We test this new algorithm in the Upper Indus basin and produce annual estimates of ELA. For the Upper Indus basin we are deriving annual ELAs that range from 5350 m to 5450 m which is slightly higher than published values of 5200 m for this region.
Optical detection of radon decay in air
Sand, Johan; Ihantola, Sakari; Peräjärvi, Kari; Toivonen, Harri; Toivonen, Juha
2016-01-01
An optical radon detection method is presented. Radon decay is directly measured by observing the secondary radioluminescence light that alpha particles excite in air, and the selectivity of coincident photon detection is further enhanced with online pulse-shape analysis. The sensitivity of a demonstration device was 6.5 cps/Bq/l and the minimum detectable concentration was 12 Bq/m3 with a 1 h integration time. The presented technique paves the way for optical approaches in rapid radon detection, and it can be applied beyond radon to the analysis of any alpha-active sample which can be placed in the measurement chamber. PMID:26867800
Loftin, Keith A.; Dietze, Julie E.; Meyer, Michael T.; Graham, Jennifer L.; Maksimowicz, Megan M.; Toyne, Kathryn D.
2016-05-26
At least one microcystin congener was detected by LC/MS/MS in 52 percent of the 27 samples analyzed at a concentration greater than the LC/MS/MS minimum reporting level (MRL) of 0.010 μg/L and included detections for microcystin-LA, microcystin-LR, microcystin-LY, microcystin-RR, and microcystin-YR. Anatoxin-a, cylindrospermopsin, and nodularin-R were detected in 15 percent, 7 percent, and 4 percent of samples, respectively, at concentrations above 0.010 μg/L. Deoxycylindrospermopsin, domoic acid, lyngbyatoxin-a, microcystin-LF, microcystin-LW, and okadaic acid were not detected in the LC/MS/MS subset.
Minimum Requirements for Accurate and Efficient Real-Time On-Chip Spike Sorting
Navajas, Joaquin; Barsakcioglu, Deren Y.; Eftekhar, Amir; Jackson, Andrew; Constandinou, Timothy G.; Quiroga, Rodrigo Quian
2014-01-01
Background Extracellular recordings are performed by inserting electrodes in the brain, relaying the signals to external power-demanding devices, where spikes are detected and sorted in order to identify the firing activity of different putative neurons. A main caveat of these recordings is the necessity of wires passing through the scalp and skin in order to connect intracortical electrodes to external amplifiers. The aim of this paper is to evaluate the feasibility of an implantable platform (i.e. a chip) with the capability to wirelessly transmit the neural signals and perform real-time on-site spike sorting. New Method We computationally modelled a two-stage implementation for online, robust, and efficient spike sorting. In the first stage, spikes are detected on-chip and streamed to an external computer where mean templates are created and sent back to the chip. In the second stage, spikes are sorted in real-time through template matching. Results We evaluated this procedure using realistic simulations of extracellular recordings and describe a set of specifications that optimise performance while keeping to a minimum the signal requirements and the complexity of the calculations. Comparison with Existing Methods A key bottleneck for the development of long-term BMIs is to find an inexpensive method for real-time spike sorting. Here, we simulated a solution to this problem that uses both offline and online processing of the data. Conclusions Hardware implementations of this method therefore enable low-power long-term wireless transmission of multiple site extracellular recordings, with application to wireless BMIs or closed-loop stimulation designs. PMID:24769170
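The two-stage scheme can be caricatured in a few lines: thresholded peak detection followed by assignment to the nearest mean template, which is the template-matching step that would run on-chip. The waveforms and templates below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two mean templates (32 samples each), as would be learned offline in stage one.
templates = np.stack([np.exp(-((np.arange(32) - 10) / s) ** 2) for s in (2.0, 4.0)])

def detect_spikes(trace, thresh, refractory=20):
    """Return peak-aligned spike times from simple threshold crossings."""
    spikes, i = [], 10
    while i < len(trace) - 22:
        if trace[i] > thresh:
            peak = i + int(np.argmax(trace[i:i + 8]))   # align to the local maximum
            if peak <= len(trace) - 22:
                spikes.append(peak)
            i = peak + refractory
        else:
            i += 1
    return spikes

def sort_spike(waveform):
    """Stage two: nearest template by squared Euclidean distance."""
    return int(np.argmin(((templates - waveform) ** 2).sum(axis=1)))

trace = 0.05 * rng.normal(size=5000)
for pos, unit in [(500, 0), (1500, 1), (3000, 0)]:
    trace[pos - 10:pos + 22] += templates[unit]          # embed three known spikes

for t in detect_spikes(trace, thresh=0.5):
    print(t, sort_spike(trace[t - 10:t + 22]))           # should recover units 0, 1, 0
```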
NASA Astrophysics Data System (ADS)
Bayrak, Ergin; Çağlayan, Akın; Konukman, Alp Er S.
2017-10-01
Finned tube evaporators are used in a wide range of applications such as commercial and industrial cold/frozen storage rooms with high traffic loading under frosting conditions. In this case study, an evaporator with an integrated fan was manufactured and tested under frosting conditions by changing only the air flow rate, in an ambient balanced-type test laboratory, as compared with testing in a wind tunnel with a more uniform flow distribution, in order to detect the effect of air flow rate on frosting. During the test, operation was performed separately at three different air flow rates. The parameters concerning test operation, such as changes in air temperature, air relative humidity, surface temperature, air-side pressure drop and refrigerant-side capacity, were followed in detail for each air flow rate. At the same time, digital images were captured in front of the evaporator; thus, frost thicknesses and blockage ratios over the course of fan stall were determined by using an image-processing technique. Consequently, the test and visual results showed that the trendline of air-side pressure drop increased slowly in the first stage of test operations, then increased linearly up to a peak, after which the linearity was abruptly disrupted. This point marked the beginning of the defrost operation in each case. In addition, despite detecting a velocity that needs to be avoided, a test applied at the minimum air velocity is superior to providing minimum capacity in terms of loss of capacity during test operations.
Spatio-temporal Trends of Climate Variability in North Carolina
NASA Astrophysics Data System (ADS)
Sayemuzzaman, Mohammad
Climatic trends in spatial and temporal variability of maximum temperature (Tmax), minimum temperature (Tmin), mean temperature (Tmean) and precipitation were evaluated for 249 ground-based stations in North Carolina for 1950-2009. The Mann-Kendall (MK), the Theil-Sen Approach (TSA) and the Sequential Mann-Kendall (SQMK) tests were applied to quantify the significance of trend, magnitude of trend and the trend shift, respectively. The lag-1 serial correlation and double mass curve techniques were used to address the data independency and homogeneity. The pre-whitening technique was used to eliminate the effect of auto correlation of the data series. The difference between minimum and maximum temperatures, and so the diurnal temperature range (DTR), at some stations was found to be decreasing on both an annual and a seasonal basis, with an overall increasing trend in the mean temperature. For precipitation, a statewide increasing trend in fall (highest in November) and decreasing trend in winter (highest in February) were detected. No pronounced increasing/decreasing trends were detected in annual, spring, and summer precipitation time series. Trend analysis on a spatial scale (for three physiographic regions: mountain, piedmont and coastal) revealed mixed results. Coastal zone exhibited increasing mean temperature (warming) trend as compared to other locations whereas mountain zone showed decreasing trend (cooling). Three main moisture components (precipitation, total cloud cover, and soil moisture) and the two major atmospheric circulation modes (North Atlantic Oscillation and Southern Oscillation) were used for correlative analysis purposes with the temperature (specifically with DTR) and precipitation trends. It appears that the moisture components are associated with DTR more than the circulation modes in North Carolina.
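As an illustration of the trend machinery named above, the sketch below implements the Mann-Kendall test with the normal approximation (no tie correction) and uses SciPy's Theil-Sen slope on a synthetic annual minimum-temperature series; pre-whitening and the sequential test are omitted.

```python
import numpy as np
from scipy import stats

def mann_kendall(y):
    """Mann-Kendall S statistic with the normal approximation (ties ignored)."""
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2.0 * (1.0 - stats.norm.cdf(abs(z)))      # z score, two-sided p value

years = np.arange(1950, 2010)
tmin = 8.0 + 0.02 * (years - 1950) + np.random.default_rng(7).normal(0, 0.5, len(years))

z, p = mann_kendall(tmin)
slope, intercept, low, high = stats.theilslopes(tmin, years)
print(round(z, 2), round(p, 4), round(slope, 4))        # significance and magnitude of trend
```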
John R. Squires; Lucretia E. Olson; David L. Turner; Nicholas J. DeCesare; Jay A. Kolbe
2012-01-01
We used snow-tracking surveys to determine the probability of detecting Canada lynx Lynx canadensis in known areas of lynx presence in the northern Rocky Mountains, Montana, USA during the winters of 2006 and 2007. We used this information to determine the minimum number of survey replicates necessary to infer the presence and absence of lynx in areas of similar lynx...
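If a single survey replicate detects lynx, when present, with probability p, the minimum number of replicates needed to reach a target cumulative detection probability follows directly; a small sketch with illustrative p values only:

```python
import math

def min_replicates(p_single, target=0.95):
    """Smallest n with 1 - (1 - p_single)**n >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

for p in (0.3, 0.5, 0.7):
    print(p, min_replicates(p))   # e.g. p = 0.5 needs 5 replicates for 95% confidence
```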
Ammonia Optical Sensing by Microring Resonators
Passaro, Vittorio M. N.; Dell'Olio, Francesco; De Leonardis, Francesco
2007-01-01
A very compact (device area around 40 μm²) optical ammonia sensor based on a microring resonator is presented in this work. Silicon-on-insulator technology is used in the sensor design and a dye-doped polymer is adopted as the sensing material. The sensor exhibits very good linearity and a minimum detectable refractive index shift of the sensing material as low as 8×10⁻⁵, with a detection limit around 4 ‰. PMID:28903258
2017-09-01
this project, we launched at Esperanza pier (Figure 5-4), which required a minimum of 2 hours of travel time, including transit from Camp Garcia to the... concentrations of emerging contaminants by providing a time-integrated sample with low detection limits and in situ extraction. PSDs are fairly well... A continuous sampling approach allows detection and quantification of chemicals in an integrated manner, providing time-weighted average (TWA
2006-10-01
trial has provided a vast and valuable polarimetric data set that has been and will be beneficial to the study of polarimetric signatures of ships. ...The following polarimetric issues are relevant to the Polar Epsilon CONOPS and will be studied further: • The effects of acquisition geometry, target... between minimum detectable ship size and area coverage rate. Therefore, vessel detection will be dependent upon beam mode selection. The vessel
The magnetic sense and its use in long-distance navigation by animals.
Walker, Michael M; Dennis, Todd E; Kirschvink, Joseph L
2002-12-01
True navigation by animals is likely to depend on events occurring in the individual cells that detect magnetic fields. Minimum thresholds of detection, perception and 'interpretation' of magnetic field stimuli must be met if animals are to use a magnetic sense to navigate. Recent technological advances in animal tracking devices now make it possible to test predictions from models of navigation based on the use of variations in magnetic intensity.
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
ERIC Educational Resources Information Center
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
Printability and inspectability of programmed pit defects on the masks in EUV lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, I.-Y.; Seo, H.-S.; Ahn, B.-S.
2010-03-12
Printability and inspectability of phase defects in EUVL masks originating from substrate pits were investigated. For this purpose, PDMs with programmed pits on the substrate were fabricated using different ML sources from several suppliers. Simulations with 32-nm HP L/S show that substrate pits below ~20 nm in depth would not be printed on the wafer if they could be smoothed by the ML process down to ~1 nm in depth on the ML surface. The investigation of inspectability for the programmed pits showed that the minimum pit sizes detected by KLA6xx, AIT, and M7360 depend on ML smoothing performance. Furthermore, printability results for pit defects also correlate with smoothed pit sizes. AIT results for a patterned mask with 32-nm HP L/S indicate that the minimum printable pit size could be ~28.3 nm SEVD. In addition, pits became more printable as defocus moved in the (-) direction. Consequently, the printability of phase defects strongly depends on their locations with respect to those of the absorber patterns. This indicates that defect compensation by pattern shift could be a key technique for realizing zero printable phase defects in EUVL masks.
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
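The partitioning step can be sketched directly from the covariance eigendecomposition, under the assumption of a matrix of environmental variables at presence locations (synthetic here); the component with the smallest eigenvalue is the minimum-variance combination interpreted above as most limiting.

```python
import numpy as np

rng = np.random.default_rng(8)
presence = rng.normal(size=(200, 5))              # environmental variables at detections
presence[:, 0] = 0.2 * rng.normal(size=200)       # one nearly constant (limiting) variable

mean = presence.mean(axis=0)
cov = np.cov(presence, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)              # eigenvalues in ascending order

def d2_components(x):
    """Partition Mahalanobis D2 at x into independent per-component terms."""
    scores = eigvec.T @ (x - mean)
    return scores ** 2 / eigval                   # first entry: minimum-variance component

x_new = rng.normal(size=5)
parts = d2_components(x_new)
full_d2 = (x_new - mean) @ np.linalg.inv(cov) @ (x_new - mean)
print(parts, np.isclose(parts.sum(), full_d2))    # components sum to the full D2
```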
An attempt to detect lameness in galloping horses by use of body-mounted inertial sensors.
Lopes, Marco A F; Dearo, Antonio C O; Lee, Allen; Reed, Shannon K; Kramer, Joanne; Pai, P Frank; Yonezawa, Yoshiharu; Maki, Hiromitchi; Morgan, Terry L; Wilson, David A; Keegan, Kevin G
2016-10-01
OBJECTIVE To evaluate head, pelvic, and limb movement to detect lameness in galloping horses. ANIMALS 12 Thoroughbreds. PROCEDURES Movement data were collected with inertial sensors mounted on the head, pelvis, and limbs of horses trotting and galloping in a straight line before and after induction of forelimb and hind limb lameness by use of sole pressure. Successful induction of lameness was determined by measurement of asymmetric vertical head and pelvic movement during trotting. Differences in gallop strides before and after induction of lameness were evaluated with paired-sample statistical analysis and neural network training and testing. Variables included maximum, minimum, range, and time indices of vertical head and pelvic acceleration, head rotation in the sagittal plane, pelvic rotation in the frontal plane, limb contact intervals, stride durations, and limb lead preference. Difference between median standardized gallop strides for each limb lead before and after induction of lameness was calculated as the sum of squared differences at each time index and assessed with a 2-way ANOVA. RESULTS Head and pelvic acceleration and rotation, limb timing, stride duration measurements, and limb lead preference during galloping were not significantly different before and after induction of lameness in the forelimb or hind limb. Differences between limb leads before induction of lameness were similar to or greater than differences within limb leads before and after lameness induction. CONCLUSIONS AND CLINICAL RELEVANCE Galloping horses maintained asymmetry of head, pelvic, and limb motion between limb leads that was unrelated to lameness.
Mitochondrial DNA transfer to the nucleus generates extensive insertion site variation in maize.
Lough, Ashley N; Roark, Leah M; Kato, Akio; Ream, Thomas S; Lamb, Jonathan C; Birchler, James A; Newton, Kathleen J
2008-01-01
Mitochondrial DNA (mtDNA) insertions into nuclear chromosomes have been documented in a number of eukaryotes. We used fluorescence in situ hybridization (FISH) to examine the variation of mtDNA insertions in maize. Twenty overlapping cosmids, representing the 570-kb maize mitochondrial genome, were individually labeled and hybridized to root tip metaphase chromosomes from the B73 inbred line. A minimum of 15 mtDNA insertion sites on nine chromosomes were detectable using this method. One site near the centromere on chromosome arm 9L was identified by a majority of the cosmids. To examine variation in nuclear mitochondrial DNA sequences (NUMTs), a mixture of labeled cosmids was applied to chromosome spreads of ten diverse inbred lines: A188, A632, B37, B73, BMS, KYS, Mo17, Oh43, W22, and W23. The number of detectable NUMTs varied dramatically among the lines. None of the tested inbred lines other than B73 showed the strong hybridization signal on 9L, suggesting that there is a recent mtDNA insertion at this site in B73. Different sources of B73 and W23 were examined for NUMT variation within inbred lines. Differences were detectable, suggesting either that mtDNA is being incorporated or lost from the maize nuclear genome continuously. The results indicate that mtDNA insertions represent a major source of nuclear chromosomal variation.
Yamaguchi, H; Igari, J; Kume, H; Abe, M; Oguri, T; Kanno, H; Kawakami, S; Okuzumi, K; Fukayama, M; Ito, A; Kawata, K; Uchida, K
1997-09-01
The emergence of Candida albicans resistance to azole antifungal agents has been reported in the U.S. and Europe. We examined the in vitro antifungal activities of fluconazole against clinical isolates collected by seven investigators over three years to examine whether a tendency existed toward the development of azole resistance among fungal isolates in Japan. The following results were obtained: 1. Sensitivities to fluconazole (FLCZ) were determined for yeast-like fungi, including 113 strains isolated in 1993, 149 strains isolated in 1994 and 205 strains isolated in 1995. No significant differences in sensitivities in the three years were detected. 2. Minimum inhibitory concentrations of FLCZ were 0.1-0.78 microgram/ml for C. albicans and 3.13-25 micrograms/ml for C. glabrata. Strains with an FLCZ MIC of 25 micrograms/ml were detected: two strains of C. krusei and one strain each of C. krusei, Trichosporon beigelii and Hansenula anomala. No strains with an FLCZ MIC higher than 50 micrograms/ml were detected. 3. In vitro activities of FLCZ were compared between clinical strains isolated between 1993 and 1995 and clinical strains isolated before the marketing of FLCZ (up to December 1987) or clinical yeasts isolated between 1991 and 1992. No significant differences were observed, suggesting that no tendency existed toward azole resistance among the fungal strains examined.
NASA Astrophysics Data System (ADS)
Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke
2016-04-01
Automated lineament analysis on remotely sensed data requires two general process steps: The identification of neighboring pixels showing high contrast and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool programmed in R using digital elevation models as input data to generate morphological lineaments defined as follows: A morphological lineament represents a zone of high relief roughness, whose length significantly exceeds the width. As relief roughness any deviation from a flat plane, defined by a roughness threshold, is considered. In our novel approach a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the different orientated straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance of the edge cells. Thus multiple roughness values depending on the neighborhood sizes and orientations of the edge connecting lines are generated for each cell and their maximum and minimum values are extracted. Thereby negative signs of the roughness parameter represent concave relief structures as valleys, positive signs convex relief structures as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter, highlighting maximum and minimum roughness peaks, and representing the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences. We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief map of these digital elevation models. The lineament segments trace the relief structure to a great extent and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. Also the neighborhood size at which lineament segments are detected correspond with the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures perfectly matches with valleys and ridges of the surface.
Changes in heat waves indices in Romania over the period 1961-2015
NASA Astrophysics Data System (ADS)
Croitoru, Adina-Eliza; Piticar, Adrian; Ciupertea, Antoniu-Flavius; Roşca, Cristina Florina
2016-11-01
In the last two decades many climate change studies have focused on extreme temperatures as they have a significant impact on environment and society. Among the weather events generated by extreme temperatures, heat waves are some of the most harmful. The main objective of this study was to detect and analyze changes in heat waves in Romania based on daily observation data (maximum and minimum temperature) over the extended summer period (May-Sept) using a set of 10 indices and to explore the spatial patterns of changes. Heat wave data series were derived from daily maximum and minimum temperature data sets recorded in 29 weather stations across Romania over a 55-year period (1961-2015). In this study, the threshold chosen was the 90th percentile calculated based on a 15-day window centered on each calendar day, and for three baseline periods (1961-1990, 1971-2000, and 1981-2010). Two heat wave definitions were considered: at least three consecutive days when maximum temperature exceeds 90th percentile, and at least three consecutive days when minimum temperature exceeds 90th percentile. For each of them, five variables were calculated: amplitude, magnitude, number of events, duration, and frequency. Finally, 10 indices resulted for further analysis. The main results are: most of the indices have statistically significant increasing trends; only one index for one weather station indicated statistically significant decreasing trend; the changes are more intense in case of heat waves detected based on maximum temperature compared to those obtained for heat waves identified based on minimum temperature; western and central regions of Romania are the most exposed to increasing heat waves.
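The first heat-wave definition above can be sketched as follows, assuming pandas; the daily Tmax series is synthetic, the base period is 1961-1990, and leap-day handling is simplified.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
dates = pd.date_range("1961-01-01", "2015-12-31", freq="D")
tmax = pd.Series(25 + 8 * np.sin(2 * np.pi * dates.dayofyear.to_numpy() / 365.25)
                 + rng.normal(0, 3, len(dates)), index=dates)

# Calendar-day 90th percentile from a 15-day centred window over the base period.
base = tmax.loc["1961":"1990"]
threshold = {}
for doy in range(1, 367):
    window = [(doy + k - 1) % 366 + 1 for k in range(-7, 8)]
    threshold[doy] = np.percentile(base[base.index.dayofyear.isin(window)], 90)

daily_thresh = pd.Series(tmax.index.dayofyear, index=tmax.index).map(threshold)
exceed = tmax > daily_thresh
runs = (exceed != exceed.shift()).cumsum()
heat_waves = [g for _, g in exceed.groupby(runs) if g.iloc[0] and len(g) >= 3]
print(len(heat_waves), max(len(g) for g in heat_waves))   # number of events, longest duration
```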