A multi-threshold sampling method for TOF-PET signal processing
NASA Astrophysics Data System (ADS)
Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.
2009-04-01
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm3 LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ˜18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ˜9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ˜300 ps coincidence timing resolution, ˜14% energy resolution at 511 keV, and ˜5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
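To make the sampling scheme concrete, here is a small sketch of how threshold-crossing times (leading and falling edges) could be extracted from a digitized pulse; the waveform, threshold values, and interpolation are illustrative assumptions, not the prototype's HPTDC processing.

```python
import numpy as np

def threshold_crossings(t, v, thresholds):
    """For each threshold, return (leading-edge time, falling-edge time) found by
    linear interpolation between the samples that straddle the threshold."""
    edges = {}
    for th in thresholds:
        above = v >= th
        rising = np.where(~above[:-1] & above[1:])[0]
        falling = np.where(above[:-1] & ~above[1:])[0]
        if rising.size and falling.size:
            i, j = rising[0], falling[-1]
            t_lead = np.interp(th, v[i:i + 2], t[i:i + 2])
            t_fall = np.interp(th, [v[j + 1], v[j]], [t[j + 1], t[j]])
            edges[th] = (t_lead, t_fall)
    return edges

# Example: a simple exponential pulse sampled at 1 ns, four user-defined thresholds
t = np.arange(0, 200.0)                                   # ns
v = np.where(t > 20, np.exp(-(t - 20) / 44.0), 0.0)       # 44 ns decay, unit amplitude
print(threshold_crossings(t, v, thresholds=[0.1, 0.25, 0.5, 0.75]))
```

The resulting edge times are the kind of samples from which both the event timing (earliest leading edge) and an energy estimate (pulse area reconstructed from the crossings) can be derived.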
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with corresponding thresholds of ~0, 20 and 25 μg C. For sucrose, however, the discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less pronounced for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction is to use multi-point corrected data below the determined threshold and single-point results beyond it. The effectiveness of this correction was supported by correlation with optical data.
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2006-12-12
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
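A minimal software sketch of the winner-take-all readout logic described above, assuming a numpy array of sampled channel amplitudes and an arbitrary noise-floor value; the channel index plays the role of the digital address and the selected amplitude stands in for the multiplexed analog output.

```python
import numpy as np

def readout(samples: np.ndarray, noise_floor: float):
    """Return (channel index, amplitude) of the largest sample above the
    noise floor, or None if every channel is below threshold."""
    above = samples > noise_floor                  # compare each channel to the noise floor
    if not above.any():
        return None                                # no channel fired
    idx = int(np.argmax(np.where(above, samples, -np.inf)))   # highest amplitude above threshold
    return idx, float(samples[idx])

# Example: eight detector channels, one clear hit on channel 2
print(readout(np.array([0.02, 0.01, 0.85, 0.03, 0.02, 0.04, 0.01, 0.00]), noise_floor=0.1))
```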
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2004-05-18
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the networks based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information in fully connected networks.
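A brief sketch, under assumed inputs, of the edge-deletion step: given a matrix of return series (one column per stock), compute the pairwise correlation matrix and keep only edges whose absolute correlation exceeds a chosen threshold. The threshold value here is arbitrary.

```python
import numpy as np

def threshold_network(returns: np.ndarray, threshold: float) -> np.ndarray:
    """Build a binary adjacency matrix from pairwise correlations,
    keeping only edges with |corr| at or above the threshold."""
    corr = np.corrcoef(returns, rowvar=False)    # fully connected correlation network
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                     # no self-loops
    return adj

# Example: random data for 5 hypothetical stocks over 250 days, pruned at |corr| >= 0.5
rng = np.random.default_rng(0)
print(threshold_network(rng.normal(size=(250, 5)), threshold=0.5))
```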
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the K-means method. The first class includes cloud, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method is tested with MODIS data acquired at different times and under different underlying surface conditions. Visual assessment of the algorithm's performance shows that it can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire detection approach.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve the detection of defects, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the single-threshold selection method based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold selection formulae were calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, substantially accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation results. It proves to be a fast and effective method for image segmentation.
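As an illustration of the chaotic component mentioned above, a minimal sketch of a tent-map sequence generator that could seed candidate threshold positions; the map parameter and the scaling to the gray-level range are assumptions, not the authors' settings.

```python
def tent_map_sequence(x0: float, n: int, mu: float = 1.99):
    """Generate n values of the tent map x_{k+1} = mu * min(x_k, 1 - x_k) in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * min(x, 1.0 - x)   # tent map iteration
        xs.append(x)
    return xs

# Example: map chaotic values in (0, 1) to candidate gray-level thresholds in [0, 255]
candidates = [int(round(255 * v)) for v in tent_map_sequence(0.37, 10)]
print(candidates)
```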
NASA Astrophysics Data System (ADS)
Qin, Y.; Lu, P.; Li, Z.
2018-04-01
Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such methods are labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in their applicability to different study areas and data, and there is considerable room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding for training sample generation, and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied to regional landslide mapping from 10 m Sentinel-2 images in Western China. The results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. To our knowledge, this study is the first attempt to map landslides from free, medium-resolution satellite (i.e., Sentinel-2) images in China.
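A rough sketch, under assumed threshold values, of how a change-detection difference image might be multi-thresholded into likely-changed, likely-unchanged, and uncertain classes to seed training samples; the quantile cut-offs below are illustrative, not the authors' values.

```python
import numpy as np

def label_training_samples(diff: np.ndarray, low_q: float = 0.70, high_q: float = 0.95):
    """Split an image-difference map into labels: 1 = likely landslide (strong change),
    0 = likely unchanged, -1 = uncertain (excluded from training)."""
    low, high = np.quantile(diff, [low_q, high_q])   # two thresholds on the difference values
    labels = np.full(diff.shape, -1, dtype=int)
    labels[diff >= high] = 1    # strong change: positive training samples
    labels[diff <= low] = 0     # little change: negative training samples
    return labels

# Example on a synthetic difference image: counts of (uncertain, unchanged, changed)
rng = np.random.default_rng(1)
print(np.bincount(label_training_samples(rng.normal(size=(100, 100))).ravel() + 1))
```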
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and for defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
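The soft-thresholding penalty mentioned above can be summarized by the standard shrinkage operator; a minimal numpy sketch, not the authors' MATLAB implementation:

```python
import numpy as np

def soft_threshold(w: np.ndarray, lam: float) -> np.ndarray:
    """Shrink coefficients toward zero: sign(w) * max(|w| - lam, 0).
    Coefficients smaller in magnitude than lam are set exactly to zero, enforcing sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# Example: small weights are zeroed out, large ones are shrunk
print(soft_threshold(np.array([0.05, -0.3, 1.2, -0.01]), lam=0.1))
```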
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks in studying the ecological environment of a watershed. The Yanhe water system is characterized by a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in the Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (on the TM1, TM4 and TM5 bands), derived by maximum likelihood, are applied before NDWI water extraction to separate built-up land and small linear rivers. With the proposed method, a water map is extracted from 2010 Landsat/TM images of the watershed in China. An accuracy assessment compares the proposed method with conventional water indexes such as the NDWI, Modified NDWI (MNDWI), Enhanced Water Index (EWI), and Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and more effectively suppresses confusing background objects than the conventional water indexes. The MST-NDWI method integrates the NDWI with multi-spectral threshold segmentation, yielding richer information and notably accurate water extraction in the Yanhe watershed.
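A compact sketch of the index-plus-threshold idea: compute the NDWI from green and near-infrared bands and apply a cut-off. The band arrays and the 0.0 cut-off are assumptions for illustration, not the multi-spectral maximum-likelihood thresholds used in the paper.

```python
import numpy as np

def ndwi_water_mask(green: np.ndarray, nir: np.ndarray, cutoff: float = 0.0) -> np.ndarray:
    """NDWI = (green - nir) / (green + nir); pixels above the cut-off are flagged as water."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)   # avoid division by zero
    return ndwi > cutoff

# Example with tiny synthetic reflectance arrays
green = np.array([[0.20, 0.05], [0.30, 0.08]])
nir = np.array([[0.05, 0.30], [0.10, 0.40]])
print(ndwi_water_mask(green, nir))
```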
Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform
NASA Astrophysics Data System (ADS)
Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili
2017-08-01
A key problem in evaluating the effectiveness of fractured-vuggy carbonatite reservoirs is how to accurately extract fracture and vug information from electrical imaging logging data. Drill-bit vibration during drilling produces rugged borehole walls and thus conductivity fluctuations in the electrical imaging logging data. These conductivity fluctuations (formation background noise) directly affect fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on the wavelet packet transform to eliminate the influence of rugged borehole walls. The noise appears as fluctuations in the button-electrode conductivity curves and as pockmarked responses in static electrical imaging logging images. The noise has responses at various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method decomposes the data with a wavelet packet transform on a quadratic spline basis, shrinks the high-frequency wavelet packet coefficients at different resolutions with a minimax threshold and a hard-threshold function, and finally reconstructs the thresholded coefficients. We use electrical imaging logging data collected from a fractured-vuggy Ordovician carbonatite reservoir in the Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown to prove the effectiveness of the de-noising procedure.
Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics
NASA Astrophysics Data System (ADS)
Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo
2016-08-01
Laser damage precursors in the subsurface of fused silica (e.g. photosensitive impurities, scratches and redeposited silica compounds) were mitigated by mineral acid leaching and by HF etching with multi-frequency ultrasonic agitation, respectively. We compared the scratch morphology after static etching with that after high-frequency ultrasonic agitation etching, and we also compared the laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of the reaction products was avoided by combining multi-frequency ultrasonic agitation with a chemical leaching process. These measures made the increase in laser damage threshold more stable. In addition, no scratch-related damage initiations were found on the samples treated by the Advanced Mitigation Process.
Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics
Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo
2016-01-01
Laser damage precursors in the subsurface of fused silica (e.g. photosensitive impurities, scratches and redeposited silica compounds) were mitigated by mineral acid leaching and by HF etching with multi-frequency ultrasonic agitation, respectively. We compared the scratch morphology after static etching with that after high-frequency ultrasonic agitation etching, and we also compared the laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of the reaction products was avoided by combining multi-frequency ultrasonic agitation with a chemical leaching process. These measures made the increase in laser damage threshold more stable. In addition, no scratch-related damage initiations were found on the samples treated by the Advanced Mitigation Process. PMID:27484188
Multi-mode ultrasonic welding control and optimization
Tang, Jason C.H.; Cai, Wayne W
2013-05-28
A system and method for providing multi-mode control of an ultrasonic welding system. In one embodiment, the control modes include the energy of the weld, the time of the welding process and the compression displacement of the parts being welded during the welding process. The method includes providing thresholds for each of the modes, and terminating the welding process after the threshold for each mode has been reached, the threshold for more than one mode has been reached or the threshold for one of the modes has been reached. The welding control can be either open-loop or closed-loop, where the open-loop process provides the mode thresholds and once one or more of those thresholds is reached the welding process is terminated. The closed-loop control provides feedback of the weld energy and/or the compression displacement so that the weld power and/or weld pressure can be increased or decreased accordingly.
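A toy sketch of the open-loop termination logic described above, assuming simulated per-cycle measurements and arbitrary threshold values; a real controller would read these quantities from the welder hardware.

```python
def should_terminate(energy_J, time_s, displacement_mm, thresholds, mode="any"):
    """Return True when the weld should stop.
    mode="any": stop when one threshold is reached; mode="all": stop only when all are reached."""
    reached = [energy_J >= thresholds["energy_J"],
               time_s >= thresholds["time_s"],
               displacement_mm >= thresholds["displacement_mm"]]
    return any(reached) if mode == "any" else all(reached)

# Example: arbitrary thresholds, stop as soon as any one is met
limits = {"energy_J": 150.0, "time_s": 0.8, "displacement_mm": 0.25}
print(should_terminate(energy_J=155.0, time_s=0.5, displacement_mm=0.1, thresholds=limits))
```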
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve an exhaustive search for the optimal thresholds that maximize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values that maximize Otsu's objective function for eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves to be robust and effective, as shown by numerical experiments reporting Otsu's objective values and standard deviations.
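To make the objective concrete, here is a small sketch of the multi-threshold Otsu between-class variance computed from a gray-level histogram; an optimizer (the paper's flower pollination algorithm, or anything else) would search for thresholds that maximize it. The brute-force search below is only for illustration on a tiny example.

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu objective for multi-level thresholding: sum of w_k * (mu_k - mu)^2 over classes."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# Tiny example: exhaustive search for two thresholds on a 16-level histogram
rng = np.random.default_rng(2)
hist = np.histogram(rng.normal([4, 11], 1.5, size=(500, 2)).ravel(), bins=16, range=(0, 16))[0]
best = max(combinations(range(1, 16), 2), key=lambda t: between_class_variance(hist, t))
print(best)
```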
Population density estimated from locations of individuals on a passive detector array
Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.
2009-01-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the linear system involved. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The MRIs restored with SVT denoising show smaller sampling errors than direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs is exploited to reconstruct images from significantly undersampled k-space. The challenge, however, is that random undersampling introduces incoherent artifacts, adding noise-like interference to the sparse representation of the image. Recovery algorithms in the literature are not capable of fully removing these artifacts, so a denoising procedure is needed to improve the quality of the image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which further improves the quality of image reconstruction by removing noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It is shown that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
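A minimal numpy sketch of the core singular value thresholding step: shrink the singular values of a matrix and rebuild a lower-rank approximation. The threshold value here is arbitrary, and this is only the basic operator, not the full nuclear-norm minimization described above.

```python
import numpy as np

def svt(matrix: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: soft-threshold the singular values by tau,
    yielding a low-rank, denoised approximation of the input matrix."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # small singular values are removed
    return (u * s_shrunk) @ vt

# Example: rank-2 matrix plus noise, denoised with an arbitrary threshold
rng = np.random.default_rng(3)
low_rank = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 64))
noisy = low_rank + 0.1 * rng.normal(size=(64, 64))
print(np.linalg.matrix_rank(svt(noisy, tau=2.0)))
```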
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors...that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple
Threshold multi-secret sharing scheme based on phase-shifting interferometry
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Shi, Zhengang
2017-03-01
A threshold multi-secret sharing scheme is proposed based on phase-shifting interferometry. The K secret images to be shared are first encoded using Fourier transforms. These encoded images are then shared into many shadow images based on the recording principle of phase-shifting interferometry. In the recovering stage, the secret images can be restored by combining any 2K + 1 or more shadow images, while any 2K or fewer shadow images cannot reveal any information about the secret images. As a result, a (2K + 1, N) threshold multi-secret sharing scheme is implemented. Simulation results are presented to demonstrate the feasibility of the proposed method.
Griffis, Joseph C; Allendorfer, Jane B; Szaflarski, Jerzy P
2016-01-15
Manual lesion delineation by an expert is the standard for lesion identification in MRI scans, but it is time-consuming and can introduce subjective bias. Alternative methods often require multi-modal MRI data, user interaction, scans from a control population, and/or arbitrary statistical thresholding. We present an approach for automatically identifying stroke lesions in individual T1-weighted MRI scans using naïve Bayes classification. Probabilistic tissue segmentation and image algebra were used to create feature maps encoding information about missing and abnormal tissue. Leave-one-case-out training and cross-validation were used to obtain out-of-sample predictions for each of 30 cases with left hemisphere stroke lesions. Our method correctly predicted lesion locations for 30/30 un-trained cases. Post-processing with smoothing (8 mm FWHM) and cluster-extent thresholding (100 voxels) was found to improve performance. Quantitative evaluations of post-processed out-of-sample predictions on 30 cases revealed high spatial overlap (mean Dice similarity coefficient = 0.66) and volume agreement (mean percent volume difference = 28.91; Pearson's r = 0.97) with manual lesion delineations. Our automated approach agrees with manual tracing. It provides an alternative to automated methods that require multi-modal MRI data, additional control scans, or user interaction to achieve optimal performance. Our fully trained classifier has applications in neuroimaging and clinical contexts. Copyright © 2015 Elsevier B.V. All rights reserved.
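A hedged sketch of the leave-one-case-out naïve Bayes idea using scikit-learn; the feature matrix and labels below are placeholders standing in for the paper's voxel-wise feature maps, so only the training scheme is illustrated.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut

# Placeholder data: one row per case, columns are hypothetical summary features
rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))             # hypothetical feature vectors for 30 cases
y = rng.integers(0, 2, size=30)          # hypothetical lesion / non-lesion labels

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])   # train on all but one case
    preds[test_idx] = clf.predict(X[test_idx])           # out-of-sample prediction
print((preds == y).mean())
```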
A Multi-Channel Method for Detecting Periodic Forced Oscillations in Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, James D.; Tuffner, Francis K.
2016-11-14
Forced oscillations in electric power systems are often symptomatic of equipment malfunction or improper operation. Detecting and addressing the cause of the oscillations can improve overall system operation. In this paper, a multi-channel method of detecting forced oscillations and estimating their frequencies is proposed. The method operates by comparing the sum of scaled periodograms from various channels to a threshold. A method of setting the threshold to specify the detector's probability of false alarm while accounting for the correlation between channels is also presented. Results from simulated and measured power system data indicate that the method outperforms its single-channel counterpart and is suitable for real-world applications.
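A simplified sketch of the detection statistic described above: sum periodograms across channels, scaled per channel, and flag frequencies that exceed a threshold. The scaling by sample variance and the fixed threshold are assumptions; the paper derives the threshold from a target false-alarm probability.

```python
import numpy as np
from scipy.signal import periodogram

def detect_forced_oscillation(signals: np.ndarray, fs: float, threshold: float):
    """signals: (n_channels, n_samples). Returns frequencies where the summed,
    per-channel-scaled periodogram exceeds the threshold."""
    total = None
    for ch in signals:
        f, pxx = periodogram(ch, fs=fs)
        scaled = pxx / np.var(ch)            # crude per-channel scaling (assumption)
        total = scaled if total is None else total + scaled
    return f[total > threshold]

# Example: a 0.7 Hz forced oscillation buried in noise on 4 channels sampled at 30 Hz
rng = np.random.default_rng(5)
t = np.arange(0, 120, 1 / 30)
sigs = np.vstack([0.5 * np.sin(2 * np.pi * 0.7 * t) + rng.normal(size=t.size) for _ in range(4)])
print(detect_forced_oscillation(sigs, fs=30.0, threshold=5.0))
```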
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing of video is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of the video reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is estimated from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of video frames effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
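An illustrative sketch, assuming PyWavelets is available, of estimating frame sparsity as the fraction of wavelet coefficients needed to capture a given share of the total energy; the wavelet family, decomposition level, and the 0.99 energy threshold are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def estimate_sparsity(frame: np.ndarray, energy_threshold: float = 0.99) -> float:
    """Fraction of 2D DWT coefficients (largest-energy first) needed to capture
    the given share of the total energy -- a simple proxy for frame sparsity."""
    coeff_arr, _ = pywt.coeffs_to_array(pywt.wavedec2(frame, "haar", level=3))
    energy = np.sort(coeff_arr.ravel() ** 2)[::-1]        # descending coefficient energies
    cumulative = np.cumsum(energy) / energy.sum()
    k = int(np.searchsorted(cumulative, energy_threshold)) + 1
    return k / energy.size

# Example on a smooth synthetic frame (mostly low-frequency content, hence sparse)
x = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.sin(np.linspace(0, np.pi, 64)))
print(estimate_sparsity(x))
```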
Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning
NASA Astrophysics Data System (ADS)
Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.
2018-02-01
We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product that facilitates cut-out magnetograms of solar active regions (AR) in the Sun in near-realtime (NRT), taken over a five-year interval (2012-2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude >M1 and >C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC = 0.93 (0.00), true skill statistic TSS = 0.74 (0.02), and Heidke skill score HSS = 0.49 (0.01) for >M1 flare prediction with a probability threshold of 15%, and ACC = 0.84 (0.00), TSS = 0.60 (0.01), and HSS = 0.59 (0.01) for >C1 flare prediction with a probability threshold of 35%.
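A hedged sketch of the probability-threshold step: train a random forest, convert class probabilities to flare / no-flare calls at a chosen threshold, and compute the true skill statistic. The feature matrix, labels, and the 15% threshold below are stand-ins, not the paper's SHARP-derived predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def true_skill_statistic(y_true, y_pred):
    """TSS = hit rate - false alarm rate."""
    tp = np.sum((y_true == 1) & (y_pred == 1)); fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1)); tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn) - fp / (fp + tn)

# Placeholder data: 13 hypothetical magnetogram predictors, binary flare labels
rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 13))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:800], y[:800])
proba = clf.predict_proba(X[800:])[:, 1]          # probability of the flare class
y_pred = (proba >= 0.15).astype(int)              # low probability threshold favours detection
print(true_skill_statistic(y[800:], y_pred))
```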
Compressed-Sensing Multi-Spectral Imaging of the Post-Operative Spine
Worters, Pauline W.; Sung, Kyunghyun; Stevens, Kathryn J.; Koch, Kevin M.; Hargreaves, Brian A.
2012-01-01
Purpose To apply compressed sensing (CS) to in vivo multi-spectral imaging (MSI), which uses additional encoding to avoid MRI artifacts near metal, and demonstrate the feasibility of CS-MSI in post-operative spinal imaging. Materials and Methods Thirteen subjects referred for spinal MRI were examined using T2-weighted MSI. A CS undersampling factor was first determined using a structural similarity index as a metric for image quality. Next, these fully sampled datasets were retrospectively undersampled using a variable-density random sampling scheme and reconstructed using an iterative soft-thresholding method. The fully- and under-sampled images were compared by using a 5-point scale. Prospectively undersampled CS-MSI data were also acquired from two subjects to ensure that the prospective random sampling did not affect the image quality. Results A two-fold outer reduction factor was deemed feasible for the spinal datasets. CS-MSI images were shown to be equivalent or better than the original MSI images in all categories: nerve visualization: p = 0.00018; image artifact: p = 0.00031; image quality: p = 0.0030. No alteration of image quality and T2 contrast was observed from prospectively undersampled CS-MSI. Conclusion This study shows that the inherently sparse nature of MSI data allows modest undersampling followed by CS reconstruction with no loss of diagnostic quality. PMID:22791572
Improving Magnitude Detection Thresholds Using Multi-Station Multi-Event, and Multi-Phase Methods
2008-07-31
...applicable correlation methods can be applied to different tectonic settings and for what percentage of the seismicity. 111 million correlations were performed on Lg-waves for the events in... Acknowledgments: We'd like to thank the operators of the Chinese Digital Seismograph Network, the U.S. Geological Survey, and...
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to images of the lungs. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed. The results were then compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to have good quality if it has the smallest MSE value and the highest PSNR. The results show that four of the sample images meet the criteria with the connected threshold method, while one sample favors the threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
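For reference, a small sketch of the two evaluation parameters, assuming 8-bit images represented as numpy arrays:

```python
import numpy as np

def mse(reference: np.ndarray, result: np.ndarray) -> float:
    """Mean squared error between a reference image and a segmentation result."""
    return float(np.mean((reference.astype(float) - result.astype(float)) ** 2))

def psnr(reference: np.ndarray, result: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the result is closer to the reference."""
    err = mse(reference, result)
    return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)

# Example with toy 8-bit images
a = np.full((4, 4), 200, dtype=np.uint8)
b = a.copy(); b[0, 0] = 180
print(mse(a, b), psnr(a, b))
```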
Chorel, Marine; Lanternier, Thomas; Lavastre, Éric; Bonod, Nicolas; Bousquet, Bruno; Néauport, Jérôme
2018-04-30
We report on a numerical optimization of the laser-induced damage threshold of multi-dielectric high-reflection mirrors in the sub-picosecond regime. We highlight how the electric field distribution, the refractive index, and the intrinsic laser-induced damage threshold of the materials jointly determine the overall laser-induced damage threshold (LIDT) of the multilayer. We describe an optimization method for the multilayer that minimizes the field enhancement in high-refractive-index materials while preserving near-perfect reflectivity. This method yields a significant improvement in damage resistance, with a maximum increase of 40% achievable in the overall LIDT of the multilayer.
Excitation-based and informational masking of a tonal signal in a four-tone masker.
Leibold, Lori J; Hitchens, Jack J; Buss, Emily; Neff, Donna L
2010-04-01
This study examined contributions of peripheral excitation and informational masking to the variability in masking effectiveness observed across samples of multi-tonal maskers. Detection thresholds were measured for a 1000-Hz signal presented simultaneously with each of 25 four-tone masker samples. Using a two-interval, forced-choice adaptive task, thresholds were measured with each sample fixed throughout trial blocks for ten listeners. Average thresholds differed by as much as 26 dB across samples. An excitation-based model of partial loudness [Moore, B. C. J. et al. (1997). J. Audio Eng. Soc. 45, 224-237] was used to predict thresholds. These predictions accounted for a significant portion of variance in the data of several listeners, but no relation between the model and data was observed for many listeners. Moreover, substantial individual differences, on the order of 41 dB, were observed for some maskers. The largest individual differences were found for maskers predicted to produce minimal excitation-based masking. In subsequent conditions, one of five maskers was randomly presented in each interval. The difference in performance for samples with low versus high predicted thresholds was reduced in random compared to fixed conditions. These findings are consistent with a trading relation whereby informational masking is largest for conditions in which excitation-based masking is smallest.
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using one of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and the vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameters (on the order of, or below, the size of the CT point spread function). The performance of the thresholding-based methods obviously depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
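A small sketch of the FWHM idea applied to a 1D intensity profile across a vessel, with linear interpolation at the half-maximum crossings; the profile and the pixel spacing are synthetic assumptions.

```python
import numpy as np

def fwhm(profile: np.ndarray, spacing_mm: float) -> float:
    """Full width at half maximum of a 1D intensity profile, in millimetres,
    using linear interpolation at the two half-maximum crossings."""
    half = (profile.max() + profile.min()) / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the fractional positions of the crossings on each side
    fl = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    fr = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return (fr - fl) * spacing_mm

# Example: Gaussian-like vessel profile sampled at 0.5 mm spacing
x = np.arange(-10, 11) * 0.5
profile = np.exp(-(x ** 2) / (2 * 1.0 ** 2))       # sigma = 1 mm -> FWHM ~ 2.355 mm
print(round(fwhm(profile, spacing_mm=0.5), 2))
```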
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Ding, Jiule; Xing, Wei; Chen, Jie; Dai, Yongming; Sun, Jun; Li, Dengfa
2014-01-21
To explore the influence of the signal-to-noise ratio (SNR) on the analysis of clear cell renal cell carcinoma (CCRCC) using DWI with multiple b values. Images of 17 cases with CCRCC were analyzed, including 17 masses and 9 pure cysts. The signal intensity of the cysts and masses was measured separately on DWI for each b value. The minimal SNR at which the signal curve appeared as a single exponential line was recorded as the threshold. The SNR of the CCRCC was calculated on DWI for each b value and compared with the threshold by an independent two-sample t test. The signal on DWI decreased with increasing b factors for both pure cysts and CCRCC. The threshold is 1.29 ± 0.17, and the signal intensity of the cysts on multi-b-value DWI appears as a single exponential line when b ≤ 800 s/mm². For CCRCC, the SNR is similar to the threshold when b = 1000 s/mm² (t = 0.40, P = 0.69), and is lower when b = 1200 s/mm² (t = -2.38, P = 0.03). The SNR should be sufficient for quantitative analysis of DWI, and the maximal b value is 1000 s/mm² for CCRCC.
NASA Astrophysics Data System (ADS)
Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos
2018-04-01
The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong-Ou-Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.
Research on growth and defects of 5 in. YCOB single crystal
NASA Astrophysics Data System (ADS)
Tu, Xiaoniu; Wang, Sheng; Xiong, Kainan; Zheng, Yanqing; Shi, Erwei
2018-04-01
YCa4O(BO3)3 (YCOB) is an important nonlinear optical crystal and a key optical element in the SHG and OPCPA processes used to obtain high-repetition-rate, multi-petawatt laser pulses. In this work, we have grown 5 in. YCOB crystals by the Czochralski method and investigated phase separation and defects, as well as their formation mechanisms. The laser-induced damage threshold (LIDT), rocking curve and transmission spectrum are characterized using defect-free samples. It is believed that, based on this work, large-sized defect-free YCOB crystals will be obtained in the near future.
Jackson, Rod
2017-01-01
Background Many national cardiovascular disease (CVD) risk factor management guidelines now recommend that drug treatment decisions should be informed primarily by patients' multi-variable predicted risk of CVD, rather than on the basis of single risk factor thresholds. Investigating the potential impact of treatment guidelines based on CVD risk thresholds at a national level requires individual-level data representing the multi-variable CVD risk factor profiles of a country's total adult population. As these data are seldom, if ever, available, we aimed to create a synthetic population representing the joint CVD risk factor distributions of the adult New Zealand population. Methods and results A synthetic population of 2,451,278 individuals, representing the actual age, gender, ethnicity and social deprivation composition of people aged 30-84 years who completed the 2013 New Zealand census, was generated using Monte Carlo sampling. Each 'synthetic' person was then probabilistically assigned values of the remaining cardiovascular disease (CVD) risk factors required for predicting their CVD risk, based on data from the national census, national hospitalisation and drug dispensing databases, and a large regional cohort study, using Monte Carlo sampling and multiple imputation. Where possible, the synthetic population CVD risk distributions for each non-demographic risk factor were validated against independent New Zealand data sources. Conclusions We were able to develop a synthetic national population with realistic multi-variable CVD risk characteristics. The construction of this population is the first step in the development of a micro-simulation model intended to investigate the likely impact of a range of national CVD risk management strategies that will inform CVD risk management guideline updates in New Zealand and elsewhere. PMID:28384217
Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A
2011-04-05
Four portable NIR instruments from the same manufacturer that were nominally identical were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on other units after a calibration transfer procedure that used piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold one false negative (Type II) and eight false positive (Type I) errors were found in over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the range of diethylene glycol for economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol to be carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results collected for all participants were statistically analyzed to determine a limit of detection (2.0% DEG) and limit of quantitation (6.5%) that can be expected for a method distributed to multiple field laboratories. Published by Elsevier B.V.
Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.
Barrera, Roberto
2009-07-01
Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimations can be large. This study explores two methods for making pupal surveys more practical yet reliable and used data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
Assembling Ordered Nanorod Superstructures and Their Application as Microcavity Lasers
NASA Astrophysics Data System (ADS)
Liu, Pai; Singh, Shalini; Guo, Yina; Wang, Jian-Jun; Xu, Hongxing; Silien, Christophe; Liu, Ning; Ryan, Kevin M.
2017-03-01
Herein we report the formation of multi-layered arrays of vertically aligned and close packed semiconductor nanorods in perfect registry at a substrate using electric field assisted assembly. The collective properties of these CdSexS1-x nanorod emitters are harnessed by demonstrating a relatively low amplified spontaneous emission (ASE) threshold and a high net optical gain at medium pump intensity. The importance of order in the system is highlighted where a lower ASE threshold is observed compared to disordered samples.
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In research on optical synthetic aperture imaging systems, phase congruency is the main problem and it is necessary to detect the sub-aperture phase. The edges in a sub-aperture system are more complex than those in a traditional optical imaging system. Because large-aperture optical components have steep slopes, the interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. Therefore, an efficient edge detection method is urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Based on its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is increasingly suppressed as the scale grows. In addition, an adaptive threshold method, which sets different thresholds in different regions, can distinguish edge points from noise. Firstly, the fringe pattern is obtained and a cubic B-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima along the gradient directions. Because these maxima still contain noise, the adaptive threshold method is used to select among the modulus maxima: points greater than the threshold are boundary points. Finally, we apply erosion and dilation to the resulting image to obtain continuous image boundaries.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for the analysis of land cover change. In this paper, we present a new framework for detecting changing land cover using satellite imagery. Morphological features and multiple indexes are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract and remove the effects of shadows on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
Liang, C Jason; Budoff, Matthew J; Kaufman, Joel D; Kronmal, Richard A; Brown, Elizabeth R
2012-07-02
Extent of atherosclerosis measured by amount of coronary artery calcium (CAC) in computed tomography (CT) has been traditionally assessed using thresholded scoring methods, such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information might exist below the threshold, which would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated threshold-free scoring method, the spatially weighted calcium score (SWCS) for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Chest CT scans were obtained from 6814 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.
Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based influence maximization method on the linear threshold (LT) model is presented. The method samples routes in the possible worlds of the social network and uses the Chernoff bound to estimate the number of samples required so that the estimation error stays within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Experimental results show that the method effectively selects seed-node sets that spread larger influence than other similar methods.
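As a rough illustration of the underlying computation (not the authors' route-sampling estimator), the sketch below estimates the influence spread of a seed set under the LT model by plain Monte Carlo simulation, with a Chernoff-style bound used to set the number of samples; the graph representation, the bound's constants, and all names are illustrative assumptions.

```python
# Monte Carlo estimate of LT-model influence spread (illustrative sketch).
import math
import random

def lt_spread(graph, weights, seeds, rng):
    """Simulate one LT diffusion; graph[v] = in-neighbors of v, weights[(u, v)] = edge weight."""
    thresholds = {v: rng.random() for v in graph}            # random node thresholds
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v in active:
                continue
            influence = sum(weights.get((u, v), 0.0) for u in graph[v] if u in active)
            if influence >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active)

def estimate_spread(graph, weights, seeds, eps=0.1, delta=0.05, seed=0):
    rng = random.Random(seed)
    n_samples = math.ceil(3.0 * math.log(2.0 / delta) / eps ** 2)   # Chernoff-style sample count
    total = sum(lt_spread(graph, weights, seeds, rng) for _ in range(n_samples))
    return total / n_samples

# Tiny example: a 1 -> 2 -> 3 chain; expected spread is about 1 + 0.6 + 0.36.
graph = {1: [], 2: [1], 3: [2]}
weights = {(1, 2): 0.6, (2, 3): 0.6}
print(estimate_spread(graph, weights, seeds={1}))
```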
NASA Astrophysics Data System (ADS)
Huang, Jian; Liu, Gui-xiong
2016-09-01
The identification of targets varies across different surge tests. Because feature-matching approaches to identifying the status of equipment under test require training new patterns before every test, a multi-color-space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for status identification was proposed. First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the ratios of high-luminance points and white-luminance points in the image. Second, an unknown-class sample S_r was classified by the k-NN algorithm with training set T_z according to a feature vector formed from the number of pixels, the eccentricity ratio, the compactness ratio, and the Euler number. Last, when the classification confidence coefficient equaled k, S_r was added as one sample of the pre-training set T_z'; the training set T_z was enlarged to T_z+1 from T_z' once T_z' was saturated. On nine series of illuminant, indicator-light, screen, and disturbance samples (21600 frames in total), the algorithm achieved 98.65% identification accuracy and, by itself, selected five groups of samples to enlarge the training set from T_0 to T_5.
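A minimal sketch of the self-learning loop described above, under the assumptions that feature vectors have already been extracted and that a "confidence coefficient equal to k" means all k nearest neighbors agree on the predicted class; the batch size standing in for "saturation" and the synthetic data are illustrative.

```python
# Self-learning k-NN: unanimously classified samples are pooled and later merged
# into the training set once the pool is full ("saturated").
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_learning_knn(train_X, train_y, stream_X, k=5, batch=10):
    pool_X, pool_y, labels = [], [], []
    for x in stream_X:
        clf = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
        _, idx = clf.kneighbors([x])
        label = clf.predict([x])[0]
        labels.append(label)
        if np.all(train_y[idx[0]] == label):          # confidence coefficient == k
            pool_X.append(x)
            pool_y.append(label)
        if len(pool_X) >= batch:                      # pre-training set saturated
            train_X = np.vstack([train_X, pool_X])
            train_y = np.concatenate([train_y, pool_y])
            pool_X, pool_y = [], []
    return labels, train_X, train_y

# Synthetic two-class demo.
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
train_y = np.array([0] * 30 + [1] * 30)
stream = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
labels, train_X, train_y = self_learning_knn(train_X, train_y, stream)
print(len(labels), train_X.shape)
```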
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the segmentation accuracy reaches a satisfactory level. The segmentation boundary of the LW image is then used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results show increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
Modulation frequency discrimination with single and multiple channels in cochlear implant users
Galvin, John J.; Oba, Sandy; Başkent, Deniz; Fu, Qian-Jie
2015-01-01
Temporal envelope cues convey important speech information for cochlear implant (CI) users. Many studies have explored CI users' single-channel temporal envelope processing. However, in clinical CI speech processors, temporal envelope information is processed by multiple channels. Previous studies have shown that amplitude modulation frequency discrimination (AMFD) thresholds are better when temporal envelopes are delivered to multiple rather than single channels. In clinical fitting, current levels on single channels must often be reduced to accommodate multi-channel loudness summation. As such, it is unclear whether the multi-channel advantage in AMFD observed in previous studies was due to coherent envelope information distributed across the cochlea or to the greater loudness associated with multi-channel stimulation. In this study, single- and multi-channel AMFD thresholds were measured in CI users. Multi-channel component electrodes were either widely or narrowly spaced to vary the degree of overlap between neural populations. The reference amplitude modulation (AM) frequency was 100 Hz, and coherent modulation was applied to all channels. In Experiment 1, single- and multi-channel AMFD thresholds were measured at similar loudness. In this case, current levels on component channels were higher for single- than for multi-channel AM stimuli, and the modulation depth was approximately 100% of the perceptual dynamic range (i.e., between threshold and maximum acceptable loudness). Results showed no significant difference in AMFD thresholds between similarly loud single- and multi-channel modulated stimuli. In Experiment 2, single- and multi-channel AMFD thresholds were compared at substantially different loudness. In this case, current levels on component channels were the same for single- and multi-channel stimuli ("summation-adjusted" current levels) and the same range of modulation (in dB) was applied to the component channels for both single- and multi-channel testing. With the summation-adjusted current levels, loudness was lower with single than with multiple channels, and the AM depth resulted in substantial stimulation below single-channel audibility, thereby reducing the perceptual range of AM. Results showed that AMFD thresholds were significantly better with multiple channels than with any of the single component channels. There was no significant effect of the distribution of electrodes on multi-channel AMFD thresholds. The results suggest that increased loudness due to multi-channel summation may contribute to the multi-channel advantage in AMFD, and that overall loudness may matter more than the distribution of envelope information in the cochlea. PMID:25746914
Thresher: an improved algorithm for peak height thresholding of microbial community profiles.
Starke, Verena; Steele, Andrew
2014-11-15
This article presents Thresher, an improved technique for finding peak height thresholds for automated rRNA intergenic spacer analysis (ARISA) profiles. We argue that thresholds must be sample dependent, taking community richness into account. In most previous fragment analyses, a common threshold is applied to all samples simultaneously, ignoring richness variations among samples and thereby compromising cross-sample comparison. Our technique solves this problem and at the same time provides a robust method for outlier rejection, selecting for removal any replicate pairs that are not valid replicates. Thresholds are calculated individually for each replicate in a pair, and separately for each sample. The thresholds selected are the ones that minimize the dissimilarity between the replicates after thresholding. If a choice of threshold results in the two replicates in a pair failing a quantitative test of similarity, either that threshold or that sample must be rejected. We compare thresholded ARISA results with sequencing results and demonstrate that the Thresher algorithm outperforms conventional thresholding techniques. The software is implemented in R, and the code is available at http://verenastarke.wordpress.com or by contacting the author (vstarke@ciw.edu). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved.
A New DEM Generalization Method Based on Watershed and Tree Structure
Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang
2016-01-01
DEM generalization is the basis of multi-scale terrain observation, representation, and analysis, and is also at the core of building multi-scale geographic databases. Thus, many researchers have studied both the theory and the methods of DEM generalization. This paper proposes a new terrain generalization method that extracts feature points based on a tree model constructed from the nested relationships of watershed characteristics. Using the 5 m resolution DEM of the Jiuyuan gully watershed in the Loess Plateau as the original data, feature points were extracted in every single watershed to reconstruct the DEM. Generalization from a 1:10000 DEM to a 1:50000 DEM was achieved by computing the best threshold, which was 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with several classic methods, such as aggregation, resampling, and VIP, against the original 1:50000 DEM. The results show that the method performs well. The method can choose the best threshold according to the target generalization scale to decide the density of feature points in the watershed. Meanwhile, the method preserves the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlapped contour comparison, elevation statistical parameters, and slope and aspect analysis, we found that the W8D algorithm performs well and effectively in terrain representation. PMID:27517296
Orion MPCV Touchdown Detection Threshold Development and Testing
NASA Technical Reports Server (NTRS)
Daum, Jared; Gay, Robert
2013-01-01
A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). An in-depth trade study was completed to determine optimal characteristics of the touchdown detection method resulting in an algorithm monitoring filtered, lever-arm corrected, 200 Hz Inertial Measurement Unit (IMU) vehicle acceleration magnitude data against a tunable threshold using persistence counter logic. Following the design of the algorithm, high fidelity environment and vehicle simulations, coupled with the actual vehicle FSW, were used to tune the acceleration threshold and persistence counter value to result in adequate performance in detecting touchdown and sufficient safety margin against early detection while descending under parachutes. An analytical approach including Kriging and adaptive sampling allowed for a sufficient number of finite element analysis (FEA) impact simulations to be completed using minimal computation time. The combination of a persistence counter of 10 and an acceleration threshold of approximately 57.3 ft/s^2 resulted in an impact performance factor of safety (FOS) of 1.0 and a safety FOS of approximately 2.6 for touchdown declaration. An RCS termination acceleration threshold of approximately 53.1 ft/s^2 with a persistence counter of 10 resulted in an increased impact performance FOS of 1.2 at the expense of a lowered under-parachutes safety factor of 2.2. The resulting tuned algorithm was then tested on data from eight Capsule Parachute Assembly System (CPAS) flight tests, showing an experimental minimum safety FOS of 6.1. The formulated touchdown detection algorithm will be flown on the Orion MPCV FSW during the Exploration Flight Test 1 (EFT-1) mission in the second half of 2014.
Jacob, Louis; Uvarova, Maria; Boulet, Sandrine; Begaj, Inva; Chevret, Sylvie
2016-06-02
Multi-Arm Multi-Stage designs aim at comparing several new treatments to a common reference, in order to select or drop any treatment arm when sufficient evidence has accumulated at interim analyses. We redesigned a Bayesian adaptive design initially proposed for dose-finding, focusing on the comparison of multiple experimental drugs to a control on a binary criterion measure. We redesigned a phase II clinical trial that randomly allocates patients across three (one control and two experimental) treatment arms to assess dropping decision rules. We were interested in dropping any arm due to futility, based either on the historical control rate (first rule) or on comparison across arms (second rule), and in stopping an experimental arm due to its ability to reach a sufficient response rate (third rule), using the difference of response probabilities in Bayes binomial trials between the treated and control arms as a measure of treatment benefit. Simulations were then conducted to investigate the decision operating characteristics under a variety of plausible scenarios, as a function of the decision thresholds. Our findings suggest that one experimental treatment was less efficient than the control and could have been dropped from the trial based on a sample of approximately 20 instead of 40 patients. In the simulation study, stopping decisions were reached sooner for the first rule than for the second rule, with close mean estimates of response rates and small bias. According to the decision threshold, the mean sample size to detect the required 0.15 absolute benefit ranged from 63 to 70 (rule 3), with false negative rates from less than 2% (rule 1) up to 6% (rule 2). In contrast, detecting a 0.15 inferiority in response rates required a sample size ranging on average from 23 to 35 (rules 1 and 2, respectively), with a false positive rate ranging from 3.6% to 0.6% (rule 3). Adaptive trial design is a good way to improve clinical trials. It allows removing ineffective drugs and reducing the trial sample size, while maintaining unbiased estimates. Decision thresholds can be set according to predefined fixed error decision rates. ClinicalTrials.gov Identifier: NCT01342692.
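For the arm-dropping logic, a hedged sketch of one building block: the posterior probability, under independent Beta posteriors for Bayes binomial trials, that an experimental arm beats the control by at least a given margin. The prior parameters, the 0.15 margin, and the counts are illustrative, not the trial's actual decision rules.

```python
# Posterior P(p_treatment - p_control > margin) from independent Beta posteriors.
import numpy as np

def prob_benefit(x_t, n_t, x_c, n_c, margin=0.15, a=1.0, b=1.0, draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    p_t = rng.beta(a + x_t, b + n_t - x_t, draws)   # posterior draws, treatment arm
    p_c = rng.beta(a + x_c, b + n_c - x_c, draws)   # posterior draws, control arm
    return np.mean(p_t - p_c > margin)

# Example decision rule: drop the arm for futility if this probability falls below
# a pre-specified threshold at an interim analysis.
print(prob_benefit(x_t=12, n_t=20, x_c=8, n_c=20))
```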
NASA Astrophysics Data System (ADS)
Uenomachi, M.; Orita, T.; Shimazoe, K.; Takahashi, H.; Ikeda, H.; Tsujita, K.; Sekiba, D.
2018-01-01
High-resolution Elastic Recoil Detection Analysis (HERDA), which consists of a 90° sector magnetic spectrometer and a position-sensitive detector (PSD), is a method of quantitative hydrogen analysis. In order to increase sensitivity, a HERDA system using a multi-channel silicon-based ion detector has been developed. Here, as a parallel and fast readout circuit for a multi-channel silicon-based ion detector, a slew-rate-limited time-over-threshold (ToT) application-specific integrated circuit (ASIC) was designed, and a new slew-rate-limited ToT method is proposed. The designed ASIC has 48 channels, and each channel consists of a preamplifier, a slew-rate-limited shaping amplifier, which makes the ToT response linear, and a comparator. The measured equivalent noise charges (ENCs) of the preamplifier, the shaper, and the ToT with no detector capacitance were 253±21, 343±46, and 560±56 electrons RMS, respectively. The spectra from a 241Am source measured using the slew-rate-limited ToT ASIC are also reported.
Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua
2015-05-15
Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of any individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. Multi-SNP analysis aims to test association based on a SNP set, usually defined from biological knowledge such as a gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs, and a permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5×10^-6). Our simulation results suggest that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify sets of SNPs with joint effects on the disease.
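The following sketch illustrates the general idea of a variable p-value-threshold scan with permutation-based significance; the combined statistic used here is an ad hoc Fisher-style sum, not the published OPTPDT statistic, and the toy data are assumptions.

```python
# Scan candidate p-value thresholds, keep the best combined statistic, and assess
# significance by permutation (illustrative sketch only).
import numpy as np

def optimal_threshold_stat(pvals):
    """Best combined statistic over candidate thresholds (scaled Fisher-style sum)."""
    best = -np.inf
    for t in np.unique(pvals):
        sel = pvals <= t
        stat = -2.0 * np.log(pvals[sel]).sum() / np.sqrt(sel.sum())
        best = max(best, stat)
    return best

def permutation_pvalue(pvals, permuted_pvals, n_perm=1000):
    """permuted_pvals: callable returning one permuted vector of single-SNP p-values."""
    observed = optimal_threshold_stat(pvals)
    null = np.array([optimal_threshold_stat(permuted_pvals()) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

rng = np.random.default_rng(0)
obs = rng.uniform(size=50) ** 2          # toy single-SNP p-values with some enrichment
print(permutation_pvalue(obs, lambda: rng.uniform(size=50), n_perm=200))
```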
Numerical investigation of the inertial cavitation threshold under multi-frequency ultrasound.
Suo, Dingjie; Govind, Bala; Zhang, Shengqi; Jing, Yun
2018-03-01
Through the introduction of multi-frequency sonication in high intensity focused ultrasound (HIFU), enhanced efficiency has been noted in several applications including thrombolysis, tissue ablation, sonochemistry, and sonoluminescence. One key experimental observation is that multi-frequency ultrasound can help lower the inertial cavitation threshold, thereby improving power efficiency. However, this has not been well corroborated by theory. In this paper, a numerical investigation of the inertial cavitation threshold of microbubbles (MBs) under multi-frequency ultrasound irradiation is conducted. The relationships between the cavitation threshold and MB size at various frequencies and in different media are investigated. The results of single-, dual- and triple-frequency sonication show that introducing additional frequencies reduces the inertial cavitation threshold, consistent with previous experimental work. In addition, no significant difference is observed between dual-frequency sonication with various frequency differences. This study not only reaffirms the benefit of using multi-frequency ultrasound for various applications, but also provides a possible route for optimizing ultrasound excitations for initiating inertial cavitation. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-GHz Synchronous Waveform Acquisition With Real-Time Pattern-Matching Trigger Generation
NASA Astrophysics Data System (ADS)
Kleinfelder, Stuart A.; Chiang, Shiuh-hua Wood; Huang, Wei
2013-10-01
A transient waveform capture and digitization circuit with continuous synchronous 2-GHz sampling capability and real-time programmable windowed trigger generation has been fabricated and tested. Designed in 0.25 μm CMOS, the digitizer contains a circular array of 128 sample and hold circuits for continuous sample acquisition, and attains 2-GHz sample speeds with over 800-MHz analog bandwidth. Sample clock generation is synchronous, combining a phase-locked loop for high-speed clock generation and a high-speed fully-differential shift register for distributing clocks to all 128 sample circuits. Using two comparators per sample, the sampled voltage levels are compared against two reference levels, a high threshold and a low threshold, that are set via per-comparator digital to analog converters (DACs). The 256 per-comparator 5-bit DACs compensate for comparator offsets and allow for fine reference level adjustment. The comparator results are matched in 8-sample-wide windows against up to 72 programmable patterns in real time using an on-chip programmable logic array. Each 8-sample trigger window is equivalent to 4 ns of acquisition, overlapped sample by sample in a circular fashion through the entire 128-sample array. The 72 pattern-matching trigger criteria can be programmed to be any combination of High-above the high threshold, Low-below the low threshold, Middle-between the two thresholds, or “Don't Care”-any state is accepted. A trigger pattern of “HLHLHLHL,” for example, watches for a waveform that is oscillating at about 1 GHz given the 2-GHz sample rate. A trigger is flagged in under 20 ns if there is a match, after which sampling is stopped, and on-chip digitization can proceed via 128 parallel 10-bit converters, or off-chip conversion can proceed via an analog readout. The chip exceeds 11 bits of dynamic range, nets over 800-MHz -3-dB bandwidth in a realistic system, and jitter in the PLL-based sampling clock has been measured to be about 1 part per million, RMS.
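A toy sketch of the windowed trigger logic: each sample is classified as High, Low, or Middle relative to the two thresholds, and an 8-sample window is matched against a pattern that may contain don't-care entries. Threshold values, the example window, and the pattern are illustrative, not the chip's programmable-logic implementation.

```python
# Classify samples against two thresholds and match an 8-sample window to a pattern
# of High/Low/Middle/don't-care states.
def classify(sample, lo, hi):
    return 'H' if sample > hi else ('L' if sample < lo else 'M')

def window_matches(samples, pattern, lo, hi):
    states = [classify(s, lo, hi) for s in samples]
    return all(p == 'X' or p == s for p, s in zip(pattern, states))

# A pattern like "HLHLHLHL" flags a waveform oscillating at roughly half the sample rate.
window = [0.9, -0.8, 0.85, -0.9, 0.7, -0.75, 0.95, -0.6]
print(window_matches(window, "HLHLHLHL", lo=-0.5, hi=0.5))  # True
```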
NASA Astrophysics Data System (ADS)
Xu, Saiping; Zhao, Qianjun; Yin, Kai; Cui, Bei; Zhang, Xiupeng
2016-10-01
Hollow villages are a special phenomenon of the urbanization process in China and cause a waste of land resources; recognizing and renovating hollow villages is therefore urgent. However, there has been little research on the remote sensing identification of hollow villages. In this context, in order to recognize abandoned homesteads by remote sensing, the following experiment was carried out. First, the Gram-Schmidt transform method was used to fuse the multi-spectral and panchromatic WorldView-2 images, the fused images were edge-enhanced by high-pass filtering, and multi-resolution segmentation and spectral difference segmentation were carried out to obtain image objects. Second, spectral characteristic parameters were calculated, such as the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), and the normalized difference soil index (NDSI), and shape feature parameters were extracted, such as area, length/width ratio, and rectangular fit. Third, the SEaTH algorithm was used to determine thresholds and optimize the feature space. Furthermore, the threshold classification method and the random forest classifier were combined, and an appropriate number of samples was selected to train the classifier in order to determine the important feature parameters and the best classifier parameters involved in classification. Finally, the classification results were verified by computing the confusion matrix. The classification results were spatially continuous, effectively avoiding the salt-and-pepper effect of pixel-based classification. In addition, the results showed that the extracted abandoned homesteads had complete shapes and could be distinguished from confusable classes such as homesteads in use and roads.
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and in most cases a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL), for each rule by integrating the co-expression, co-methylation, and protein-protein interactions present in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple-threshold measures. In the proposed algorithm, the DVS, DVC, and DVL values are computed for each rule separately, and it is then verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
Calculating the dim light melatonin onset: the impact of threshold and sampling rate.
Molina, Thomas A; Burgess, Helen J
2011-10-01
The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
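A hedged sketch of the two threshold rules as described: the DLMO is taken as the interpolated time of the first upward crossing of either the fixed 3 pg/mL threshold or the "3k" threshold (mean plus two standard deviations of the first three daytime points); the linear interpolation and the toy profile are assumptions.

```python
# DLMO as the interpolated first upward crossing of a melatonin threshold.
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold):
    t, y = np.asarray(times_h, float), np.asarray(melatonin_pg_ml, float)
    for i in range(1, len(y)):
        if y[i - 1] < threshold <= y[i]:
            frac = (threshold - y[i - 1]) / (y[i] - y[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    return None  # no crossing found

hours = [18, 19, 20, 21, 22, 23, 24]
samples = [1.0, 1.2, 1.1, 2.0, 4.5, 9.0, 15.0]        # pg/mL, hourly (toy profile)
fixed = dlmo(hours, samples, 3.0)                                            # 3 pg/mL rule
three_k = dlmo(hours, samples, np.mean(samples[:3]) + 2 * np.std(samples[:3], ddof=1))
print(fixed, three_k)
```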
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
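A small sketch of the empirical optimization described above: for each candidate threshold, correlate the area-average rain-rate moment with the fractional area exceeding that threshold across snapshots, and keep the threshold giving the largest correlation. The lognormal synthetic snapshots and the threshold grid are illustrative assumptions.

```python
# Empirical optimal threshold: maximize correlation between an area-average moment
# and fractional area coverage above the threshold.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.lognormal(mean=0.5, sigma=1.0, size=(500, 400))   # rain rates (mm/h)

def optimal_threshold(snapshots, moment=1, thresholds=np.linspace(0.5, 50, 200)):
    m = np.mean(snapshots ** moment, axis=1)                      # area-average moment
    best_t, best_r = None, -1.0
    for t in thresholds:
        coverage = np.mean(snapshots > t, axis=1)                 # fractional area above t
        r = np.corrcoef(m, coverage)[0, 1]
        if r > best_r:
            best_t, best_r = t, r
    return best_t, best_r

print(optimal_threshold(snapshots, moment=1))
```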
Kim, Ju-Won; Park, Seunghee
2018-01-02
In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes adapted to the wire rope. To prepare the damaged wire-rope specimens, artificial damage of several different sizes was inflicted on the wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including an enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold established from the generalized extreme value (GEV) distribution. The detected MFL signals that exceeded the threshold were analyzed quantitatively by extracting magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified by using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using the extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. To analyze the reliability of the MFL-based automated wire rope NDE method, the accuracy and reliability were evaluated by comparing the repeatedly estimated damage sizes with the actual damage sizes.
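A hedged sketch of two steps in the processing chain just described, enveloping via the Hilbert transform and setting a detection threshold from a GEV fit to damage-free data; the exceedance probability, the fit to all baseline samples, and the injected test signal are illustrative assumptions.

```python
# Hilbert-transform envelope plus a GEV-based detection threshold (illustrative).
import numpy as np
from scipy.signal import hilbert
from scipy.stats import genextreme

def envelope(signal):
    return np.abs(hilbert(signal))

def gev_threshold(baseline_envelope, exceedance_prob=1e-3):
    shape, loc, scale = genextreme.fit(baseline_envelope)
    return genextreme.ppf(1.0 - exceedance_prob, shape, loc=loc, scale=scale)

def detect_damage(signal, threshold):
    return np.where(envelope(signal) > threshold)[0]   # indices of suspected damage

rng = np.random.default_rng(0)
baseline = envelope(rng.normal(0, 1, 5000))             # damage-free reference scan
thr = gev_threshold(baseline)
test = rng.normal(0, 1, 2000)
test[1000:1010] += 8.0                                  # injected "damage" signature
print(thr, detect_damage(test, thr)[:5])
```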
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
NASA Astrophysics Data System (ADS)
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs get destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that has higher correlation with the parameters of pulmonary function tests compared to classical methods. The use of the threshold range from approximately -900 Hounsfield Unit to -990 Hounsfield Unit for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets due to the subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield Unit and -990 Hounsfield Unit for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of the voxels with the intensity below a certain threshold. The respective ratios of the voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. Neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the distribution of emphysema severity in the lung also plays an important role in the assessment of the overall functionality of the lung. We confirm our hypothesis by showing that the proposed sub-volumetric classification results correlate with the parameters of pulmonary function tests better than classical methods. We also visualize emphysema using a technique called the transparent lung model.
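A minimal sketch of the per-sub-volume feature computation described above: the fraction of voxels below each of ten thresholds between -900 HU and -990 HU forms a 10-element feature vector for the severity classifier. The even threshold spacing and the toy data are assumptions.

```python
# Multi-threshold emphysema features: fraction of voxels below each threshold.
import numpy as np

THRESHOLDS_HU = np.linspace(-900, -990, 10)

def emphysema_features(subvolume_hu):
    """subvolume_hu: array of Hounsfield Unit values for one lung sub-volume."""
    v = np.asarray(subvolume_hu).ravel()
    return np.array([(v < t).mean() for t in THRESHOLDS_HU])

rng = np.random.default_rng(0)
fake_subvolume = rng.normal(-850, 60, size=(20, 20, 20))   # toy HU values
print(emphysema_features(fake_subvolume))
```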
Pullman, Rebecca E; Roepke, Stephanie E; Duffy, Jeanne F
2012-06-01
To determine whether an accurate circadian phase assessment could be obtained from saliva samples collected by patients in their home. Twenty-four individuals with a complaint of sleep initiation or sleep maintenance difficulty were studied for two evenings. Each participant received instructions for collecting eight hourly saliva samples in dim light at home. On the following evening they spent 9 h in a laboratory room with controlled dim (<20 lux) light where hourly saliva samples were collected. Circadian phase of dim light melatonin onset (DLMO) was determined using both an absolute threshold (3 pg/mL) and a relative threshold (two standard deviations above the mean of three baseline values). Neither threshold method worked well for one participant who was a "low-secretor". In four cases the participants' in-lab melatonin levels rose much earlier or were much higher than their at-home levels, and one participant appeared to take the at-home samples out of order. Overall, the at-home and in-lab DLMO values were significantly correlated using both methods, and differed on average by 37 (±19) min using the absolute threshold and by 54 (±36) min using the relative threshold. The at-home assessment procedure was able to determine an accurate DLMO using an absolute threshold in 62.5% of the participants. Thus, an at-home procedure for assessing circadian phase could be practical for evaluating patients for circadian rhythm sleep disorders. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
NASA Astrophysics Data System (ADS)
Zhong, Donglai; Zhao, Chenyi; Liu, Lijun; Zhang, Zhiyong; Peng, Lian-Mao
2018-04-01
In this letter, we report a gate engineering method to adjust threshold voltage of carbon nanotube (CNT) based field-effect transistors (FETs) continuously in a wide range, which makes the application of CNT FETs especially in digital integrated circuits (ICs) easier. Top-gated FETs are fabricated using solution-processed CNT network films with stacking Pd and Sc films as gate electrodes. By decreasing the thickness of the lower layer metal (Pd) from 20 nm to zero, the effective work function of the gate decreases, thus tuning the threshold voltage (Vt) of CNT FETs from -1.0 V to 0.2 V. The continuous adjustment of threshold voltage through gate engineering lays a solid foundation for multi-threshold technology in CNT based ICs, which then can simultaneously provide high performance and low power circuit modules on one chip.
Shriner, Susan A; VanDalen, Kaci K; Root, J Jeffrey; Sullivan, Heather J
2016-02-01
The availability of a validated commercial assay is an asset for any wildlife investigation. However, commercial products are often developed for use in livestock and are not optimized for wildlife. Consequently, it is incumbent upon researchers and managers to apply commercial products appropriately to optimize program outcomes. We tested more than 800 serum samples from mallards for antibodies to influenza A virus with the IDEXX AI MultiS-Screen Ab test to evaluate assay performance. Applying the test per manufacturer's recommendations resulted in good performance with 84% sensitivity and 100% specificity. However, performance was improved to 98% sensitivity and 98% specificity by increasing the recommended cut-off. Using this alternative threshold for identifying positive and negative samples would greatly improve sample classification, especially for field samples collected months after infection when antibody titers have waned from the initial primary immune response. Furthermore, a threshold that balances sensitivity and specificity reduces estimation bias in seroprevalence estimates. Published by Elsevier B.V.
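A minimal sketch of the cut-off evaluation described here: sweep candidate thresholds over assay values from known-positive and known-negative samples and report sensitivity and specificity at each. The values and threshold direction are illustrative; for a competitive/blocking ELISA such as this one, lower sample-to-negative ratios typically indicate positives, so the comparison direction would be reversed in practice.

```python
# Sensitivity/specificity as a function of the assay cut-off (illustrative direction).
import numpy as np

def sens_spec(positives, negatives, cutoffs):
    out = []
    for c in cutoffs:
        sens = np.mean(np.asarray(positives) >= c)   # fraction of true positives detected
        spec = np.mean(np.asarray(negatives) < c)    # fraction of true negatives rejected
        out.append((c, sens, spec))
    return out

for c, se, sp in sens_spec([0.8, 0.9, 1.2, 1.4], [0.1, 0.2, 0.3, 0.9], [0.5, 0.7, 1.0]):
    print(f"cutoff={c:.2f}  sensitivity={se:.2f}  specificity={sp:.2f}")
```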
Region-based multi-step optic disk and cup segmentation from color fundus image
NASA Astrophysics Data System (ADS)
Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan
2013-02-01
Retinal optic cup-disk-ratio (CDR) is a one of important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation based on blood vessel inpainted HSL lightness images and green images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with the manual outlining results from two experts. Dice scores of detected disk and cup regions between the auto and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, W; Jiang, M; Yin, F
Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ can change substantially as the respiratory pattern changes over time. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training samples are updated with time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120-150 s respiration positions using the 0-120 s training signals; at the same time, MLP 2 was trained using the 30-150 s signals and then used to predict the 150-180 s positions from them. The respiration position was predicted in this alternating way until the end of the signal. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098, and the mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve the respiration forecast accuracy.
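A rough sketch of two ingredients mentioned above, Savitzky-Golay smoothing and a multi-layer perceptron predicting a future position from recent positions; it does not reproduce the ASMLP's alternating two-network update or the Levenberg-Marquardt training, and the window lengths, horizons, and synthetic breathing trace are assumptions.

```python
# Savitzky-Golay smoothing + MLP look-ahead prediction of a respiratory trace.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neural_network import MLPRegressor

def make_dataset(signal, history=20, horizon=30):
    """Predict the sample `horizon` steps ahead from the previous `history` samples."""
    X, y = [], []
    for i in range(len(signal) - history - horizon):
        X.append(signal[i:i + history])
        y.append(signal[i + history + horizon - 1])
    return np.array(X), np.array(y)

t = np.arange(0, 120, 0.1)                                   # ~10 Hz samples, 2 minutes
raw = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
smooth = savgol_filter(raw, window_length=11, polyorder=3)

X, y = make_dataset(smooth)
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("correlation:", np.corrcoef(model.predict(X[split:]), y[split:])[0, 1])
```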
Generalized analog thresholding for spike acquisition at ultralow sampling rates
He, Bryan D.; Wein, Alex; Varshney, Lav R.; Kusuma, Julius; Richardson, Andrew G.
2015-01-01
Efficient spike acquisition techniques are needed to bridge the divide from creating large multielectrode arrays (MEA) to achieving whole-cortex electrophysiology. In this paper, we introduce generalized analog thresholding (gAT), which achieves millisecond temporal resolution with sampling rates as low as 10 Hz. Consider the torrent of data from a single 1,000-channel MEA, which would generate more than 3 GB/min using standard 30-kHz Nyquist sampling. Recent neural signal processing methods based on compressive sensing still require Nyquist sampling as a first step and use iterative methods to reconstruct spikes. Analog thresholding (AT) remains the best existing alternative, where spike waveforms are passed through an analog comparator and sampled at 1 kHz, with instant spike reconstruction. By generalizing AT, the new method reduces sampling rates another order of magnitude, detects more than one spike per interval, and reconstructs spike width. Unlike compressive sensing, the new method reveals a simple closed-form solution to achieve instant (noniterative) spike reconstruction. The base method is already robust to hardware nonidealities, including realistic quantization error and integration noise. Because it achieves these considerable specifications using hardware-friendly components like integrators and comparators, generalized AT could translate large-scale MEAs into implantable devices for scientific investigation and medical technology. PMID:25904712
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
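To make the final step concrete, the sketch below finds the root in [0, 1] of a percolation polynomial with integer coefficients; the example polynomial p^3 - 3p + 1, whose root 2 sin(π/18) ≈ 0.3473 is the triangular-lattice bond threshold, is a standard illustration and not one of the polynomials derived in this paper.

```python
# Bond-threshold estimate as the root in [0, 1] of an integer-coefficient polynomial.
import numpy as np

def threshold_from_polynomial(coeffs):
    """coeffs: highest-degree first; return the real root(s) lying in [0, 1]."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[(real >= 0.0) & (real <= 1.0)]

# p^3 - 3p + 1 = 0 has the root 2*sin(pi/18) ~ 0.34730 in [0, 1].
print(threshold_from_polynomial([1, 0, -3, 1]))
```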
Ding, Wen-jie; Chen, Wen-he; Deng, Ming-jia; Luo, Hui; Li, Lin; Liu, Jun-xin
2016-02-15
Co-processing of sewage sludge in cement kilns can achieve harmless treatment, quantity reduction, stabilization, and reutilization of the sludge. The moisture content must be reduced to below 30% to meet the requirements of combustion, and thermal drying is an effective way to desiccate sludge. Odors and volatile organic compounds are generated and released during the sludge drying process, which can lead to odor pollution. The main odor pollutants were selected by a multi-index integrated assessment method, with concentration, olfactory threshold, threshold limit value, smell security level, and saturated vapor pressure considered as indexes based on the relevant regulations in China and other countries. Taking pollution potential as the evaluation target, and the risk index and odor emission intensity as evaluation indexes, a rated evaluation model of the odor pollution potential of the pollutants was built according to the Weber-Fechner law. The aim of the present study is to establish a rating method for odor pollution potential suitable for the direct thermal drying process of sludge.
Wright, Kirsty; Mundorff, Amy; Chaseling, Janet; Forrest, Alexander; Maguire, Christopher; Crane, Denis I
2015-05-01
The international disaster victim identification (DVI) response to the Boxing Day tsunami, led by the Royal Thai Police in Phuket, Thailand, was one of the largest and most complex in DVI history. Referred to as the Thai Tsunami Victim Identification operation, the group comprised a multi-national, multi-agency, and multi-disciplinary team. The traditional DVI approach proved successful in identifying a large number of victims quickly. However, the team struggled to identify certain victims due to incomplete or poor quality ante-mortem and post-mortem data. In response to these challenges, a new 'near-threshold' DVI management strategy was implemented to target presumptive identifications and improve operational efficiency. The strategy was implemented by the DNA Team, therefore DNA kinship matches that just failed to reach the reporting threshold of 99.9% were prioritized, however the same approach could be taken by targeting, for example, cases with partial fingerprint matches. The presumptive DNA identifications were progressively filtered through the Investigation, Dental and Fingerprint Teams to add additional information necessary to either strengthen or conclusively exclude the identification. Over a five-month period 111 victims from ten countries were identified using this targeted approach. The new identifications comprised 87 adults, 24 children and included 97 Thai locals. New data from the Fingerprint Team established nearly 60% of the total near-threshold identifications and the combined DNA/Physical method was responsible for over 30%. Implementing the new strategy, targeting near-threshold cases, had positive management implications. The process initiated additional ante-mortem information collections, and established a much-needed, distinct "end-point" for unresolved cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Analysis Concerning the Inspection Threshold for Multi-Site Damage.
DOT National Transportation Integrated Search
1993-12-01
Periodic inspections, at a prescribed interval, for Multi-Site Damage (MS) in longitudinal fuselage lap-joints start when the aircraft has accumulated a certain number of flights, the inspection threshold. The work reported here was an attempt to obt...
Relaxation channels of multi-photon excited xenon clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serdobintsev, P. Yu.; Melnikov, A. S.; Department of Physics, St. Petersburg State University, Saint Petersburg 198904
2015-09-21
The relaxation processes of the xenon clusters subjected to multi-photon excitation by laser radiation with quantum energies significantly lower than the thresholds of excitation of atoms and ionization of clusters were studied. Results obtained by means of the photoelectron spectroscopy method showed that desorption processes of excited atoms play a significant role in the decay of two-photon excited xenon clusters. A number of excited states of xenon atoms formed during this process were discovered and identified.
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for the determination of threshold values for signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established, and a threshold for that signature is determined as the point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determination of a desired limit of detection of a signature in an assay, are also described.
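An illustrative sketch of the described procedure under a normal-density assumption for the negative samples: the false positive rate curve FPR(t) = P(negative > t) is inverted at the chosen false positive criterion to obtain the threshold. The criterion value and synthetic data are assumptions.

```python
# Threshold where the false-positive-rate curve meets the false-positive criterion.
import numpy as np
from scipy.stats import norm

def threshold_for_fpr(negative_signals, fpr_criterion=0.01):
    mu, sigma = np.mean(negative_signals), np.std(negative_signals, ddof=1)
    # FPR(t) = P(negative > t) = 1 - CDF(t); invert at the criterion.
    return norm.ppf(1.0 - fpr_criterion, loc=mu, scale=sigma)

neg = np.random.default_rng(0).normal(100.0, 8.0, size=500)   # negative-sample signals
print(threshold_for_fpr(neg, fpr_criterion=0.01))             # ~ mu + 2.33 * sigma
```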
NASA Astrophysics Data System (ADS)
Wen, Hongwei; Liu, Yue; Wang, Shengpei; Li, Zuoyong; Zhang, Jishui; Peng, Yun; He, Huiguang
2017-03-01
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder. To date, TS is still misdiagnosed due to its varied presentation and lack of obvious clinical symptoms; studies of objective imaging biomarkers are therefore of great importance for early TS diagnosis. Tic generation has been linked to disturbed structural networks, and many recent efforts have investigated brain functional or structural networks using machine learning methods for the purpose of disease diagnosis; however, few studies have addressed TS, and those that have suffer from some drawbacks. Therefore, we propose a novel classification framework integrating a multi-threshold strategy and a network fusion scheme to address these preexisting drawbacks. We used diffusion MRI probabilistic tractography to construct the structural networks of 44 TS children and 48 healthy children, and adapted the similarity network fusion algorithm specifically to fuse the multi-threshold structural networks. Graph theoretical analysis was then implemented, and nodal degree, nodal efficiency, and nodal betweenness centrality were selected as features. Finally, the support vector machine recursive feature elimination (SVM-RFE) algorithm was used for feature selection, and the optimal features were fed into an SVM to automatically discriminate TS children from controls. We achieved a high accuracy of 89.13% evaluated by nested cross validation, demonstrating the superior performance of our framework over other comparison methods. The discriminative regions involved in classification were primarily located in the basal ganglia and frontal cortico-cortical networks, all highly related to the pathology of TS. Together, our study may provide potential neuroimaging biomarkers for early-stage TS diagnosis.
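For the final classification stage only (not the network construction or fusion), a hedged sketch of recursive feature elimination with a linear SVM over nodal graph-theoretical features, evaluated by cross-validation; the feature matrix, labels, and the number of retained features are placeholders.

```python
# SVM-RFE feature selection followed by a linear SVM classifier (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 90 * 3))      # e.g. 92 subjects x (90 nodes x 3 nodal metrics)
y = rng.integers(0, 2, size=92)        # placeholder labels (TS vs control)

pipe = make_pipeline(RFE(SVC(kernel="linear"), n_features_to_select=30),
                     SVC(kernel="linear"))
print(cross_val_score(pipe, X, y, cv=5).mean())
```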
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA
2008-08-12
A method of detecting real events is described: obtain a set of recent signal results, calculate measures of the noise or variation based on the set of recent signal results, calculate an expected baseline value based on the set of recent signal results, determine the sample deviation, calculate an allowable deviation by multiplying the sample deviation by a threshold factor, set an alarm threshold at the baseline value plus or minus the allowable deviation, and determine whether the signal results exceed the alarm threshold.
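A direct, minimal sketch of the claimed steps: estimate a baseline and a noise measure from recent signal results, scale the deviation by a threshold factor, and flag new results outside baseline ± allowable deviation. The window, the use of median and standard deviation, and the factor of 4 are illustrative choices.

```python
# Baseline-plus-allowable-deviation alarm threshold from a window of recent results.
import numpy as np

def alarm(recent, new_value, threshold_factor=4.0):
    baseline = np.median(recent)                       # expected baseline value
    sample_dev = np.std(recent, ddof=1)                # measure of noise/variation
    allowable = threshold_factor * sample_dev
    lower, upper = baseline - allowable, baseline + allowable
    return new_value > upper or new_value < lower, (lower, upper)

window = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0]
print(alarm(window, 12.5))   # (True, ...) -> exceeds the alarm threshold
```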
Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M
2014-10-01
Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection that enables us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
Recommendations for level-determined sampling in wells
NASA Astrophysics Data System (ADS)
Lerner, David N.; Teutsch, Georg
1995-10-01
Level-determined samples of groundwater are increasingly important for hydrogeological studies. The techniques for collecting them range from the use of purpose drilled wells, sometimes with sophisticated dedicated multi-level samplers in them, to a variety of methods used in open wells. Open, often existing, wells are frequently used on cost grounds, but there are risks of obtaining poor and unrepresentative samples. Alternative approaches to level-determined sampling incorporate seven concepts: depth sampling; packer systems; individual wells; dedicated multi-level systems; separation pumping; baffle systems; multi-port sock samplers. These are outlined and evaluated in terms of the environment to be sampled, and the features and performance of the methods. Recommendations are offered to match methods to sampling problems.
The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, Y. L.
2017-02-01
Gannan produces more oranges than any other region in China and accounts for a substantial share of world production. Extracting citrus orchard areas quickly and effectively is therefore important for pathogen defense, production estimation and industrial planning. Traditional pixel-based spectral classification of citrus orchards has low accuracy and cannot avoid the "salt-and-pepper" effect; under the influence of noise, the problem of different objects sharing the same spectral signature is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the study area, and addressing the low accuracy of traditional pixel-based classification, a decision tree classification method based on an object-oriented rule set is proposed. First, multi-scale segmentation is performed on the GF-1 remote sensing imagery of the study area. Sample objects are then selected for statistical analysis of spectral and geometric features. Finally, empirical thresholds on single bands, NDVI, band combinations and object geometry are applied hierarchically within a decision tree to extract orchard information, so that multi-scale segmentation and hierarchical decision tree classification are combined. The classification results are verified with a confusion matrix, and the overall Kappa index is 87.91%.
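A minimal sketch of a hierarchical, object-based threshold rule of the kind described above; the band names, NDVI and geometry thresholds, and class labels are illustrative assumptions rather than the rules used in the study.

```python
# Hierarchical threshold rules applied to per-object statistics.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def classify_object(obj):
    """obj: dict of per-object mean band reflectances and geometry statistics."""
    v = ndvi(obj["nir"], obj["red"])
    if v < 0.2:                        # rule 1: little vegetation -> not orchard
        return "non-vegetation"
    if obj["blue"] > 0.35:             # rule 2: bright built-up / bare soil
        return "built-up"
    if v > 0.5 and obj["compactness"] > 0.6:
        return "citrus orchard"        # rule 3: dense, regularly shaped canopy
    return "other vegetation"

sample = {"nir": 0.55, "red": 0.12, "blue": 0.08, "compactness": 0.72}
print(classify_object(sample))         # -> "citrus orchard" under these toy rules
```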
Quintana, Penelope J E; Matt, Georg E; Chatfield, Dale; Zakarian, Joy M; Fortmann, Addie L; Hoh, Eunha
2013-09-01
Secondhand smoke contains a mixture of pollutants that can persist in air, dust, and on surfaces for months or longer. This persistent residue is known as thirdhand smoke (THS). Here, we detail a simple method of wipe sampling for nicotine as a marker of accumulated THS on surfaces. We analyzed findings from 5 real-world studies to investigate the performance of wipe sampling for nicotine on surfaces in homes, cars, and hotels in relation to smoking behavior and smoking restrictions. The intraclass correlation coefficient for side-by-side samples was 0.91 (95% CI: 0.87-0.94). Wipe sampling for nicotine reliably distinguished between private homes, private cars, rental cars, and hotels with and without smoking bans and was significantly positively correlated with other measures of tobacco smoke contamination such as air and dust nicotine. The sensitivity and specificity of possible threshold values (0.1, 1, and 10 μg/m²) were evaluated for distinguishing between nonsmoking and smoking environments. Sensitivity was highest at a threshold of 0.1 μg/m², with 74%-100% of smoker environments showing nicotine levels above threshold. Specificity was highest at a threshold of 10 μg/m², with 81%-100% of nonsmoker environments showing nicotine levels below threshold. The optimal threshold will depend on the desired balance of sensitivity and specificity and on the types of smoking and nonsmoking environments. Surface wipe sampling for nicotine is a reliable, valid, and relatively simple collection method to quantify THS contamination on surfaces across a wide range of field settings and to distinguish between nonsmoking and smoking environments.
The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation
NASA Astrophysics Data System (ADS)
Lian, Tao
2016-04-01
In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of the local multi-scale internal variation. By choosing the record of a specified period, one can therefore arbitrarily alter the value and even the sign of the estimated long-term linear trend in regional SST, leading to controversial conclusions on how global SST has responded to global warming in recent history. Analyzing the linear trend coefficient estimated by the ordinary least-squares method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence of the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation, or, in other words, on the period being used. An improved least-squares method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean and the North Atlantic, the influence of the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
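The sketch below illustrates the core idea: an ordinary least-squares trend is interpreted as a long-term change only when its magnitude exceeds a threshold that scales with the internal variability. The synthetic series and the particular threshold formula are illustrative assumptions, not the paper's derivation.

```python
# OLS trend estimate compared against an internal-variability threshold.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1881, 2014)
t = years - years.mean()
sst = 0.004 * t + 0.3 * np.sin(2 * np.pi * t / 60.0) + 0.1 * rng.standard_normal(t.size)

slope = np.polyfit(t, sst, 1)[0]                         # OLS linear trend (deg C / yr)
internal = sst - slope * t - sst.mean()                  # residual internal variation
threshold = 2.0 * internal.std() / (t.max() - t.min())   # illustrative threshold scale

if abs(slope) > threshold:
    print(f"trend {slope:+.4f}/yr exceeds threshold {threshold:.4f}: sign is robust")
else:
    print(f"trend {slope:+.4f}/yr is below threshold {threshold:.4f}: sign may reflect internal variability")
```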
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
Sparse decomposition based on matching pursuit is an adaptive sparse representation method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm and introduces threshold de-noising into the reconstruction step. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best matching atom. Analysis of simulated gear fault signals demonstrated the effectiveness of the hard threshold, and the impulsive and harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite-dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on computational efficiency, an improved segmented decomposition and reconstruction algorithm was proposed, which significantly enhanced the efficiency of the decomposition. In addition, the multi-atom matching algorithm is shown to be superior to the single-atom matching algorithm in both computational efficiency and robustness. Finally, the algorithm was applied to engineering gear fault signals and achieved good results.
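A minimal sketch of matching pursuit over a composite impulse-plus-Fourier dictionary with hard-threshold de-noising of the recovered coefficients; the greedy argmax search stands in for the genetic-algorithm atom search described in the paper, and all dictionary sizes and threshold values are illustrative.

```python
# Matching pursuit on a composite dictionary, then hard thresholding.
import numpy as np

n = 256
t = np.arange(n)
impulse_atoms = np.eye(n)                                    # impulse (time) dictionary
fourier_atoms = np.array([np.cos(2 * np.pi * k * t / n) for k in range(1, 33)])
fourier_atoms /= np.linalg.norm(fourier_atoms, axis=1, keepdims=True)
D = np.vstack([impulse_atoms, fourier_atoms])                # composite dictionary (atoms as rows)

signal = 2.0 * np.cos(2 * np.pi * 8 * t / n)                 # harmonic component
signal[60] += 5.0                                            # impulsive fault component
signal += 0.2 * np.random.default_rng(0).standard_normal(n)  # noise

residual, coeffs = signal.copy(), np.zeros(len(D))
for _ in range(20):                                          # greedy multi-atom matching
    corr = D @ residual
    k = int(np.argmax(np.abs(corr)))
    coeffs[k] += corr[k]
    residual -= corr[k] * D[k]

coeffs[np.abs(coeffs) < 1.0] = 0.0                           # hard-threshold de-noising
reconstruction = coeffs @ D
print("reconstruction error:", np.linalg.norm(signal - reconstruction))
```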
Peripleural lung disease detection based on multi-slice CT images
NASA Astrophysics Data System (ADS)
Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
With the development of multi-slice CT technology, accurate 3D images of the lung field can be obtained in a short time, and many image processing methods are needed to support this. Detecting peripleural lung disease is difficult because the lesions lie outside the lung region extracted by conventional threshold processing. The proposed method uses the inner thoracic region, extracted from the inner cavity of the bone structure as well as the air region, and covers peripleural disease such as lung nodules, calcification, pleural effusion and pleural plaque. We applied this method to 50 cases, including 39 cases with peripleural lung disease. The method was able to detect the 39 peripleural lung disease cases with 2.9 false positives per case.
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
NASA Astrophysics Data System (ADS)
Lachaut, T.; Yoon, J.; Klassert, C. J. A.; Talozi, S.; Mustafa, D.; Knox, S.; Selby, P. D.; Haddad, Y.; Gorelick, S.; Tilmant, A.
2016-12-01
Probabilistic approaches to uncertainty in water systems management can face challenges of several types: non-stationary climate, sudden shocks such as conflict-driven migrations, or the internal complexity and dynamics of large systems. There has been a rising trend in the development of bottom-up methods that place the focus on the decision side instead of on probability distributions and climate scenarios. These approaches are based on defining acceptability thresholds for the decision makers and considering the entire range of possibilities over which such thresholds are crossed. We aim to improve knowledge of the applicability and relevance of this approach by enlarging its scope beyond climate uncertainty and single decision makers, thus including demographic shifts, internal system dynamics, and multiple stakeholders at different scales. This vulnerability analysis is part of the Jordan Water Project and makes use of an ambitious multi-agent model developed by its teams with the extensive cooperation of the Ministry of Water and Irrigation of Jordan. The case of Jordan is a relevant example of migration spikes, rapid social changes, resource depletion and climate change impacts. The multi-agent modeling framework used provides a consistent structure to assess the vulnerability of complex water resources systems with distributed acceptability thresholds and stakeholder interaction. A proof of concept and preliminary results are presented for a non-probabilistic vulnerability analysis that involves different types of stakeholders, uncertainties other than climatic and the integration of threshold-based indicators. For each stakeholder (agent), a vulnerability matrix is constructed over a multi-dimensional domain, which includes various hydrologic and/or demographic variables.
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to build the binary-tree hierarchy, and a sub-classifier at each node learns from the samples assigned to that node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted directly from clusters that contain only one class of samples. For clusters that contain two classes, the numbers of sub-clusters for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by combining the extracted central points. Simulation experiments show that this fast learning method based on multi-level clustering maintains high classification accuracy, greatly reduces the number of training samples and effectively improves learning efficiency.
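The sketch below illustrates the sample-reduction idea behind this method: cluster the training samples of each class, keep only the cluster centers, and train an SVM on the reduced set. The data set, cluster counts, and kernel are illustrative assumptions, and the binary-tree hierarchy of sub-classifiers is omitted.

```python
# Reduce each class to k-means centers, then train an SVM on the centers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=3, n_clusters_per_class=2, random_state=0)

centers, center_labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X[y == cls])
    centers.append(km.cluster_centers_)            # representative points per class
    center_labels.append(np.full(30, cls))

X_red = np.vstack(centers)                          # 90 centers replace 5000 samples
y_red = np.concatenate(center_labels)

clf = SVC(kernel="rbf").fit(X_red, y_red)
print("accuracy on the full data set:", clf.score(X, y))
```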
Feltus, F Alex; Ficklin, Stephen P; Gibson, Scott M; Smith, Melissa C
2013-06-05
In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as for different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied, also limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples and individual network construction of each group as a means for dynamic significance thresholding. The net effect is increased sensitivity, thus maximizing the total number of co-expression relationships in the final co-expression network compendium. A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an un-supervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for the network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network. Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using un-supervised methods. Because RMT ensures only highly significant interactions are kept, the GIL compendium consists of 558,022 unique high quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired.
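A minimal sketch of the pre-clustering strategy: partition expression samples with k-means and build a separately thresholded co-expression network (one GIL per cluster). A simple correlation-quantile cutoff stands in for the RMT-based thresholding used in the study, and the synthetic matrix sizes are illustrative.

```python
# Pre-cluster samples, then build one thresholded co-expression network per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.normal(size=(300, 100))         # 300 samples x 100 genes (synthetic)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr)

layers = []
for c in range(k):
    sub = expr[labels == c]                # condition-approximating sample group
    corr = np.corrcoef(sub.T)              # gene x gene correlation matrix
    np.fill_diagonal(corr, 0.0)
    cut = np.quantile(np.abs(corr), 0.99)  # dynamic (per-layer) threshold, illustrative
    edges = np.argwhere(np.triu(np.abs(corr) >= cut, 1))
    layers.append(edges)
    print(f"GIL {c}: {len(sub)} samples, {len(edges)} co-expression edges (cut={cut:.2f})")
```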
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system experiences system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit also undergoes a two-level maintenance: one level is initiated when a unit exceeds its preventive maintenance (PM) threshold, and the other is performed simultaneously whenever any unit goes in for maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence have been taken into account: set-up cost dependence and maintenance expertise dependence, since the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.
2011-01-01
Background The International Multi-centre ADHD Genetics (IMAGE) project with 11 participating centres from 7 European countries and Israel has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with the combined type of attention deficit/hyperactivity disorder (ADHD-CT) and 1446 'unselected' siblings. The aim was to analyse the IMAGE sample with respect to demographic features (gender, age, family status, and recruiting centres) and psychopathological characteristics (diagnostic subtype, symptom frequencies, age at symptom detection, and comorbidities). A particular focus was on the effects of the study design and the diagnostic procedure on the homogeneity of the sample in terms of symptom-based behavioural data, and potential consequences for further analyses based on these data. Methods Diagnosis was based on the Parental Account of Childhood Symptoms (PACS) interview and the DSM-IV items of the Conners' teacher questionnaire. Demographics of the full sample and the homogeneity of a subsample (all probands) were analysed by using robust statistical procedures which were adjusted for unequal sample sizes and skewed distributions. These procedures included multi-way analyses based on trimmed means and winsorised variances as well as bootstrapping. Results Age and proband/sibling ratios differed between participating centres. There was no significant difference in the distribution of gender between centres. There was a significant interaction between age and centre for number of inattentive, but not number of hyperactive symptoms. Higher ADHD symptom frequencies were reported by parents than teachers. The diagnostic symptoms differed from each other in their frequencies. The face-to-face interview was more sensitive than the questionnaire. The differentiation between ADHD-CT probands and unaffected siblings was mainly due to differences in hyperactive/impulsive symptoms. Conclusions Despite a symptom-based standardized inclusion procedure according to DSM-IV criteria with defined symptom thresholds, centres may differ markedly in probands' ADHD symptom frequencies. Both the diagnostic procedure and the multi-centre design influence the behavioural characteristics of a sample and, thus, may bias statistical analyses, particularly in genetic or neurobehavioral studies. PMID:21473745
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program that target single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences were found only for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, because of possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Continuous sampling control of product completeness, based on statistical methods, therefore remains an important element in managing the assembly quality of multi-element mass products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness during assembly is that the inspection is of a destructive (breaking) sort, which excludes returning inspected component parts to the process stream and reduces the actual productivity of the assembly equipment. The use of statistical procedures for continuous sampling control of multi-element product completeness on automatic rotor lines therefore requires sampling plans that ensure a minimum size of the control samples. Comparison of the limiting average outgoing defect level for the continuous sampling plan (CSP) and the automated continuous sampling plan (ACSP) shows that lower limit values of the average outgoing defect level can be provided using the ACSP-1. Also, the average sample size under the ACSP-1 plan is smaller than under the CSP-1 plan. Thus, applying the proposed plans and methods for continuous selective control to the assembly quality management of multi-element products on automatic rotor lines makes it possible to automate the sampling control procedures and maintain the required quality of assembled products while minimizing the sample size.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.
2016-01-01
Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
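The sketch below illustrates the segment-statistics idea behind SFT: tile the image, treat low-variance tiles as background, derive a threshold from the pooled background pixels, and flag signal pixels. The tile size, the variance-quantile rule, and the 3-sigma threshold are illustrative simplifications of the trend-fitting performed by SFT.

```python
# Segment the image, estimate background from low-variance tiles, threshold signal.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(256, 256))    # synthetic background
img[80:120, 80:120] += 60.0                       # synthetic bright signal spot

tile = 32
segments = [img[i:i + tile, j:j + tile]
            for i in range(0, img.shape[0], tile)
            for j in range(0, img.shape[1], tile)]
variances = np.array([s.var() for s in segments])

# Low-variance segments are taken as background; pool their pixels.
bg_pixels = np.concatenate([s.ravel() for s, v in zip(segments, variances)
                            if v <= np.quantile(variances, 0.5)])
threshold = bg_pixels.mean() + 3.0 * bg_pixels.std()
signal_mask = img > threshold
print("threshold:", round(threshold, 1), "signal pixels:", int(signal_mask.sum()))
```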
An opportunity cost approach to sample size calculation in cost-effectiveness analysis.
Gafni, A; Walter, S D; Birch, S; Sendi, P
2008-01-01
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
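For reference, the decision rule being discussed compares the ICER with the threshold value lambda (equivalently, it checks the sign of the incremental net monetary benefit); the numbers in the sketch below are purely illustrative.

```python
# ICER decision rule against a willingness-to-pay threshold lambda.
def icer(delta_cost, delta_effect):
    return delta_cost / delta_effect

delta_cost = 12_000.0        # extra cost of the new intervention per patient
delta_effect = 0.4           # extra QALYs gained per patient
lam = 50_000.0               # threshold value per QALY

ratio = icer(delta_cost, delta_effect)
net_benefit = lam * delta_effect - delta_cost
print(f"ICER = {ratio:,.0f} per QALY; "
      f"{'adopt' if ratio < lam else 'reject'} at lambda = {lam:,.0f} "
      f"(incremental net benefit = {net_benefit:,.0f})")
```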
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It is difficult for traditional threshold-value methods to identify degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a condition-based maintenance strategy for wind turbines is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and generated power as input parameters and wheel rotation speed, pitch angle and motor driving current of the three blades as output parameters. The difference between the on-line measurements and the values calculated by the MIMO regression model, fitted with the least-squares support vector machine (LSSVM) method, was defined as the observed vector of the system. A Gaussian mixture model (GMM) was then fitted to the distribution of the multi-dimensional observed vectors. Using the established model, the degradation index was calculated from the SCADA data of a wind turbine that had damaged its pitch bearing retainer and rolling elements, which illustrated the feasibility of the proposed method.
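A minimal sketch of the GMM-based degradation index: fit a mixture model to residual ("observed") vectors from healthy operation and score new residuals by their negative log-likelihood. The regression stage is omitted, and the residual dimensions, component count, and index definition are illustrative assumptions.

```python
# Degradation index from a Gaussian mixture model of healthy residuals.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
healthy_residuals = rng.normal(0.0, 1.0, size=(2000, 5))       # e.g. speed, pitch, 3 currents
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(healthy_residuals)

def degradation_index(residuals):
    """Higher values indicate observations unlikely under the healthy model."""
    return -gmm.score_samples(residuals)

normal_batch = rng.normal(0.0, 1.0, size=(100, 5))
faulty_batch = rng.normal(1.5, 1.8, size=(100, 5))              # drifted, noisier behaviour
print(f"healthy index:  {degradation_index(normal_batch).mean():.2f}")
print(f"degraded index: {degradation_index(faulty_batch).mean():.2f}")
```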
Threshold for plasma phase transition of aluminum single crystal induced by hypervelocity impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, Yuanyuan; Zhang, Qingming, E-mail: qmzhang@bit.edu.cn
2015-12-15
The molecular dynamics method is used to study the threshold for plasma phase transition of aluminum single crystal induced by hypervelocity impact. Two effective simulation methods, the piston-driven method and the multi-scale shock technique, are used to simulate the shock wave. The simulation results from the two methods agree well with the experimental data, indicating that the shock wave velocity is linearly dependent on the particle velocity. An atom is considered to be ionized if the increase of its internal energy is larger than the first ionization energy. The critical impact velocity for plasma phase transition is about 13.0 km/s, corresponding to a threshold pressure and temperature of about 220 GPa and 11.0 × 10³ K on the shock Hugoniot, respectively.
NASA Astrophysics Data System (ADS)
Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.
2013-12-01
The Fast Plasma Investigation (FPI) of the NASA Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs) with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity and reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection system count rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. The fundamental concepts of this method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters. This new method has been successfully applied to achieve a highly accurate calibration of the 16 Dual Electron Spectrometers and 16 Dual Ion Spectrometers of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown how this method will be applied to ensure the best possible in-flight calibration during the mission.
GROUND WATER MONITORING AND SAMPLING: MULTI-LEVEL VERSUS TRADITIONAL METHODS WHAT'S WHAT?
After years of research and many publications, the question still remains: What is the best method to collect representative ground water samples from monitoring wells? Numerous systems and devices are currently available for obtaining both multi-level samples as well as traditi...
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
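The sketch below illustrates the two-step correction: a pixel-wise gain threshold followed by a cluster-mass threshold on contiguous surviving pixels, using scipy.ndimage for cluster labeling. The synthetic STA and both threshold values are illustrative.

```python
# Two-step thresholding: gain threshold, then cluster-mass threshold.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
strf = rng.normal(0.0, 1.0, size=(40, 100))      # noisy STA (freq x time), synthetic
strf[10:14, 30:40] += 3.0                        # embedded excitatory subfield

gain_thresh = 2.0
mask = strf > gain_thresh                        # step 1: pixel-wise gain threshold

labels, n = ndimage.label(mask)                  # step 2: contiguous clusters
cluster_mass = ndimage.sum(strf, labels, index=range(1, n + 1))
keep = {i + 1 for i, m in enumerate(cluster_mass) if m > 20.0}   # cluster-mass threshold

cleaned = np.where(np.isin(labels, list(keep)), strf, 0.0)
print(f"{n} clusters above gain threshold, {len(keep)} retained after mass threshold")
```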
Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.
de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique
2012-07-01
Experiencing pain at newborn age may have consequences on one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds: 30.3 (1.4) versus 28.4 (1.7) (Cohen's d=1.2, P=0.001) and warm detection thresholds: 33.9 (1.9) versus 35.6 (2.1) (Cohen's d=0.8, P=0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r=0.64, warm: r=-0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ. Copyright © 2011 Elsevier Ltd. All rights reserved.
A novel method for 3D measurement of RFID multi-tag network based on matching vision and wavelet
NASA Astrophysics Data System (ADS)
Zhuang, Xiao; Yu, Xiaolei; Zhao, Zhimin; Wang, Donghua; Zhang, Wenjie; Liu, Zhenlu; Lu, Dongsheng; Dong, Dingbang
2018-07-01
In the field of radio frequency identification (RFID), the three-dimensional (3D) distribution of RFID multi-tag networks has a significant impact on their reading performance. At the same time, in order to realize the anti-collision of RFID multi-tag networks in practical engineering applications, the 3D distribution of RFID multi-tag networks must be measured. In this paper, a novel method for the 3D measurement of RFID multi-tag networks is proposed. A dual-CCD system (vertical and horizontal cameras) is used to obtain images of RFID multi-tag networks from different angles. Then, the wavelet threshold denoising method is used to remove noise in the obtained images. The template matching method is used to determine the two-dimensional coordinates and vertical coordinate of each tag. The 3D coordinates of each tag are obtained subsequently. Finally, a model of the nonlinear relation between the 3D coordinate distribution of the RFID multi-tag network and the corresponding reading distance is established using the wavelet neural network. The experimental results show that the average prediction relative error is 0.71% and the time cost is 2.17 s. The values of the average prediction relative error and time cost are smaller than those of the particle swarm optimization neural network and genetic algorithm–back propagation neural network, with the time cost of the wavelet neural network being about 1% of that of the other two methods. The proposed method thus achieves a smaller relative error and can improve the real-time performance of RFID multi-tag networks and the overall dynamic performance of multi-tag networks.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of test point number, test sequence and specified offset strain on the measurement of subsequent yield surface and the concave phenomenon for measured yield surface are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) For the single or multi-sample method, the measured subsequent yield surfaces are remarkably different from cylindrical yield surfaces proposed by the classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, test order and the cumulative effect of residual plastic strain resulting from the other test point, while those are very influential in the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be transformed to convex for single-sample method by changing the test sequence. However, for the multiple-sample method, the concave phenomenon will disappear when a larger offset strain is specified.
A low threshold nanocavity in a two-dimensional 12-fold photonic quasicrystal
NASA Astrophysics Data System (ADS)
Ren, Jie; Sun, XiaoHong; Wang, Shuai
2018-05-01
In this article, a low-threshold nanocavity is built and investigated in a two-dimensional 12-fold holographic photonic quasicrystal (PQC). The cavity is formed using the method of multi-beam common-path interference. By finely adjusting the structure parameters of the cavity, the Q factor and the mode volume are optimized, which are the two keys to a low threshold on the basis of the Purcell effect. Finally, an optimal cavity is obtained with a Q value of 6023 and a mode volume of 1.24 × 10⁻¹² cm³. In addition, by Fourier transformation of the electric field components in the cavity, the in-plane wave vectors are calculated and fitted to evaluate the cavity performance. The performance analysis of the cavity further proves the effectiveness of the optimization process. This provides guidance for research on low-threshold nano-lasers.
Methods of scaling threshold color difference using printed samples
NASA Astrophysics Data System (ADS)
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, was prepared for scaling visual color difference and for evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against the Z-scores. The resulting visual color-difference scale was obtained and checked with the STRESS factor. The results indicated that only the scales changed, while the relative scaling between pairs in the data was preserved.
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived with the necessary sample size and the number of allowable defectives (i.e. defectives mean those who have not received the drug). Using data generated through simple random sampling (SRSI) of 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and by allowing four defectives, the coverage was >or=60% in >90% of villages at the first stage. Similarly, with threshold B by sampling a maximum of nine individuals and by allowing four defectives, the coverage was >or=80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14,4,52,25) of threshold A may be adopted in MDA to assess if a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH to detect a village with a given level of coverage. The cost per village was US dollars 76.18 under SRSH. The cost of LQAS was US dollars 65.81 and 55.63 per village for thresholds A and B respectively. The total financial cost of classifying a village correctly with the given threshold level of LQAS could be reduced by 14% and 26% of the cost of conventional SRSH method.
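A minimal sketch of the LQAS decision rule: sample n individuals and classify coverage as adequate if at most d of them did not receive the drug, with operating characteristics given by the binomial distribution. The plan (n = 14, d = 4) is the first-stage plan quoted above for threshold A; the coverage values scanned are illustrative.

```python
# Acceptance probability of an LQAS plan as a function of true coverage.
from scipy.stats import binom

def prob_accept(n, d, coverage):
    """Probability of observing at most d defectives when true coverage is `coverage`."""
    return binom.cdf(d, n, 1.0 - coverage)

n, d = 14, 4
for coverage in (0.40, 0.60, 0.80, 0.935):
    print(f"true coverage {coverage:.1%}: P(accept) = {prob_accept(n, d, coverage):.2f}")
```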
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Han, Chunying; Chi, Yue
2018-06-01
In a simultaneous-source survey, no limitation is imposed on the shot scheduling of nearby sources, so a large gain in acquisition efficiency can be obtained, but at the same time the recorded seismic data become contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on a velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimate can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.
High energy PIXE: A tool to characterize multi-layer thick samples
NASA Astrophysics Data System (ADS)
Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.
2018-02-01
High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility of performing quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work, an in-depth study of the parameters involved in the previously published method is proposed, together with its extension to more complex samples containing a repeated layer. Experiments have been performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequences of a multi-layer sample including two different layers of the same element have been determined. The performance and limits of this method are presented and discussed.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants.
Fadista, João; Manning, Alisa K; Florez, Jose C; Groop, Leif
2016-08-01
Genome-wide association studies (GWAS) have long relied on proposed statistical significance thresholds to be able to differentiate true positives from false positives. Although the genome-wide significance P-value threshold of 5 × 10⁻⁸ has become a standard for common-variant GWAS, it has not been updated to cope with the lower allele frequency spectrum used in many recent array-based GWAS studies and sequencing studies. Using a whole-genome- and -exome-sequencing data set of 2875 individuals of European ancestry from the Genetics of Type 2 Diabetes (GoT2D) project and a whole-exome-sequencing data set of 13,000 individuals from five ancestries from the GoT2D and T2D-GENES (Type 2 Diabetes Genetic Exploration by Next-generation sequencing in multi-Ethnic Samples) projects, we describe guidelines for genome- and exome-wide association P-value thresholds needed to correct for multiple testing, explaining the impact of linkage disequilibrium thresholds for distinguishing independent variants, minor allele frequency and ancestry characteristics. We emphasize the advantage of studying recent genetic isolate populations when performing rare and low-frequency genetic association analyses, as the multiple testing burden is diminished due to higher genetic homogeneity.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
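The sketch below shows an iterative soft-thresholding (ISTA-type) reconstruction from undersampled Fourier data, the family of algorithms discussed above. For simplicity the signal is assumed sparse in its own domain rather than in a wavelet domain, and the sampling mask, step size, and threshold value are illustrative assumptions.

```python
# ISTA: gradient step on the data-fidelity term followed by soft thresholding.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 1, 10)   # sparse ground truth

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 64, replace=False)] = True                      # undersampled k-space locations
y = np.fft.fft(x_true, norm="ortho")[mask]                         # measured k-space samples

x = np.zeros(n)
for _ in range(300):                                                # ISTA iterations
    k = np.fft.fft(x, norm="ortho")
    residual = np.zeros(n, dtype=complex)
    residual[mask] = k[mask] - y
    grad = np.real(np.fft.ifft(residual, norm="ortho"))             # adjoint of the sampling operator
    x = soft(x - grad, 0.01)                                        # gradient step + soft threshold

print("relative reconstruction error:",
      round(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), 3))
```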
We compared three benthic macroinvertebrate sampling methods on the St. Croix, Wisconsin and Scioto Rivers in summer 2004 and 2005. EPA's newly developed, multi-habitat Large River Bioassessment Protocol (LR-BP) was compared to the multi-habitat method of the Minnesota Pollution...
Winhusen, Theresa; Winstanley, Erin L; Somoza, Eugene; Brigham, Gregory
2012-01-01
Recruitment method can impact the sample composition of a clinical trial and, thus, the generalizability of the results, but the importance of recruitment method in substance use disorder trials has received little attention. The present paper sought to address this research gap by evaluating the association between recruitment method and sample characteristics and treatment outcomes in a substance use disorder trial. In a multi-site trial evaluating Seeking Safety (SS), relative to Women's Health Education (WHE), for women with co-occurring PTSD (either sub-threshold or full PTSD) and substance use disorders, one site assessed the method by which each participant was recruited. Data from this site (n=106), which recruited participants from newspaper advertising and clinic intakes, were analyzed. Participants recruited through advertising, relative to those from the clinic, had significantly higher levels of baseline drug use and higher rates of meeting DSM-IV-TR criteria for full PTSD. Results suggest that the effectiveness of SS in decreasing PTSD symptoms was greater for participants recruited through advertising relative to those recruited from the clinic. Conversely, the results revealed a significant treatment effect in the clinic-recruited participants, not seen in the advertising-recruited participants, with SS, relative to WHE, participants being more likely to report past week drug use during the follow-up phase. Recruitment method may impact sample composition and treatment effects. Replication of this finding would have important implications for substance use disorder efficacy trials which often utilize advertising to recruit participants. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
High mobility high efficiency organic films based on pure organic materials
Salzman, Rhonda F [Ann Arbor, MI; Forrest, Stephen R [Ann Arbor, MI
2009-01-27
A method of purifying small molecule organic material, performed as a series of operations beginning with a first sample of the organic small molecule material. The first step is to purify the organic small molecule material by thermal gradient sublimation. The second step is to test the purity of at least one sample from the purified organic small molecule material by spectroscopy. The third step is to repeat the first through third steps on the purified small molecule material if the spectroscopic testing reveals any peaks exceeding a threshold percentage of a magnitude of a characteristic peak of a target organic small molecule. The steps are performed at least twice. The threshold percentage is at most 10%. Preferably the threshold percentage is 5% and more preferably 2%. The threshold percentage may be selected based on the spectra of past samples that achieved target performance characteristics in finished devices.
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.
2014-07-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When exceeding a certain threshold carbon load, multipoint correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocol underestimated 22, 36 and 12% of TC, respectively, with the corresponding threshold being ~ 0, 20 and 25 μgC. For sucrose, however, such discrepancy was observed only with the IMPshort protocol, indicating the need of more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of single-point method were higher. The correction method proposed was to use multipoint-corrected data when below the determined threshold, and use single-point results when beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
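The proposed correction reduces to a simple per-protocol decision rule; a hypothetical helper illustrating it (the function and variable names are placeholders, not part of the instrument software):

```python
def corrected_tc(tc_multipoint, tc_singlepoint, carbon_load_ugC, threshold_ugC):
    """Use the multi-point baseline correction below the protocol-specific
    threshold carbon load, and the single-point correction above it."""
    return tc_multipoint if carbon_load_ugC < threshold_ugC else tc_singlepoint

# approximate thresholds reported above for ambient samples
protocol_thresholds_ugC = {"IMPshort": 0.0, "IMPlong": 20.0, "rtNIOSH": 25.0}
```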
Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo
2018-04-25
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.
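One way to picture the frequency-exchange step is a single-sideband shift built on the analytic signal, which moves a large characteristic frequency (and its harmonics) down into the small-parameter range required by SR. This is a generic illustration, not the authors' FERSR implementation, and the frequencies used are invented:

```python
import numpy as np
from scipy.signal import hilbert

def ssb_frequency_shift(x, fs, f_shift):
    """Shift the spectrum of x down by f_shift Hz via single-sideband
    modulation of the analytic signal."""
    t = np.arange(len(x)) / fs
    return np.real(hilbert(x) * np.exp(-2j * np.pi * f_shift * t))

# example: a 1 kHz fault component with its 2nd harmonic, sampled at 20 kHz,
# shifted so that the fundamental lands near 5 Hz before SR processing
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
x_shifted = ssb_frequency_shift(x, fs, 995.0)
```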
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The threshold is quantified by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also shows how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.
The oxygen uptake slow component at submaximal intensities in breaststroke swimming
Oliveira, Diogo R.; Gonçalves, Lio F.; Reis, António M.; Fernandes, Ricardo J.; Garrido, Nuno D.
2016-01-01
The present work proposed to study the oxygen uptake slow component (VO2 SC) of breaststroke swimmers at four different intensities of submaximal exercise, via mathematical modeling of a multi-exponential function. The slow component (SC) was also assessed with two different fixed interval methods and the three methods were compared. Twelve male swimmers performed a test comprising four submaximal 300 m bouts at different intensities where all expired gases were collected breath by breath. Multi-exponential modeling showed values above 450 ml·min−1 of the SC in the last two bouts of exercise (those with intensities above the lactate threshold). A significant effect of the method that was used to calculate the VO2 SC was revealed. Higher mean values were observed when using mathematical modeling compared with the fixed interval 3rd min method (F=7.111; p=0.012; η2=0.587); furthermore, differences were detected between the two fixed interval methods. No significant relationship was found between the SC determined by any method and the blood lactate measured at each of the four exercise intensities. In addition, no significant association between the SC and peak oxygen uptake was found. It was concluded that in trained breaststroke swimmers, the presence of the VO2 SC may be observed at intensities above that corresponding to the 3.5 mmol·L−1 threshold. Moreover, mathematical modeling of the oxygen uptake on-kinetics tended to show a higher slow component as compared to fixed interval methods. PMID:28149379
NASA Astrophysics Data System (ADS)
Chen, Cheng; Jin, Dakai; Zhang, Xiaoliu; Levy, Steven M.; Saha, Punam K.
2017-03-01
Osteoporosis is associated with an increased risk of low-trauma fractures. Segmentation of trabecular bone (TB) is essential to assess TB microstructure, which is a key determinant of bone strength and fracture risk. Here, we present a new method for TB segmentation for in vivo CT imaging. The method uses Hessian matrix-guided anisotropic diffusion to improve local separability of trabecular structures, followed by a new multi-scale morphological reconstruction algorithm for TB segmentation. High sensitivity (0.93), specificity (0.93), and accuracy (0.92) were observed for the new method based on regional manual thresholding on in vivo CT images. Mechanical tests have shown that TB segmentation using the new method improved the ability of derived TB spacing measure for predicting actual bone strength (R2=0.83).
Galaxy clustering dependence on the [O II] emission line luminosity in the local Universe
NASA Astrophysics Data System (ADS)
Favole, Ginevra; Rodríguez-Torres, Sergio A.; Comparat, Johan; Prada, Francisco; Guo, Hong; Klypin, Anatoly; Montero-Dorta, Antonio D.
2017-11-01
We study the galaxy clustering dependence on the [O II] emission line luminosity in the SDSS DR7 Main galaxy sample at mean redshift z ∼ 0.1. We select volume-limited samples of galaxies with different [O II] luminosity thresholds and measure their projected, monopole and quadrupole two-point correlation functions. We model these observations using the 1 h-1 Gpc MultiDark-Planck cosmological simulation and generate light cones with the SUrvey GenerAtoR algorithm. To interpret our results, we adopt a modified (Sub)Halo Abundance Matching scheme, accounting for the stellar mass incompleteness of the emission line galaxies. The satellite fraction constitutes an extra parameter in this model and allows us to optimize the clustering fit on both small and intermediate scales (i.e. rp ≲ 30 h-1 Mpc), with no need for any velocity bias correction. We find that, in the local Universe, the [O II] luminosity correlates with all the clustering statistics explored and with the galaxy bias. This latter quantity correlates more strongly with the SDSS r-band magnitude than with the [O II] luminosity. In conclusion, we propose a straightforward method to produce reliable clustering models, entirely built on the simulation products, which provides robust predictions of the typical ELG host halo masses and satellite fraction values. The SDSS galaxy data, MultiDark mock catalogues and clustering results are made publicly available.
Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J
2005-12-01
We assessed reproducibility of measurements of rectal compliance and sensation in health in studies conducted at two centres. We estimated the sample sizes necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using the ascending method of limits and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) in random order. Results obtained at the two centres differed minimally. Reproducibility of sensory end points varies with type of sensation, pressure level and method of distension. Pressure threshold for pain and sensory ratings for non-painful sensations at 36 and 48 mmHg distension were most reproducible in the two centres. Sample size calculations suggested that a crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but in a two-centre study, differences in observed results of sensation are minimal and pressure threshold for pain and sensory ratings at 36-48 mmHg of distension are reproducible.
Knight, Josh; Wells, Susan; Marshall, Roger; Exeter, Daniel; Jackson, Rod
2017-01-01
Many national cardiovascular disease (CVD) risk factor management guidelines now recommend that drug treatment decisions should be informed primarily by patients' multi-variable predicted risk of CVD, rather than on the basis of single risk factor thresholds. To investigate the potential impact of treatment guidelines based on CVD risk thresholds at a national level requires individual level data representing the multi-variable CVD risk factor profiles for a country's total adult population. As these data are seldom, if ever, available, we aimed to create a synthetic population, representing the joint CVD risk factor distributions of the adult New Zealand population. A synthetic population of 2,451,278 individuals, representing the actual age, gender, ethnicity and social deprivation composition of people aged 30-84 years who completed the 2013 New Zealand census, was generated using Monte Carlo sampling. Each 'synthetic' person was then probabilistically assigned values of the remaining CVD risk factors required for predicting their CVD risk, based on data from the national census, national hospitalisation and drug dispensing databases and a large regional cohort study, using Monte Carlo sampling and multiple imputation. Where possible, the synthetic population CVD risk distributions for each non-demographic risk factor were validated against independent New Zealand data sources. We were able to develop a synthetic national population with realistic multi-variable CVD risk characteristics. The construction of this population is the first step in the development of a micro-simulation model intended to investigate the likely impact of a range of national CVD risk management strategies that will inform CVD risk management guideline updates in New Zealand and elsewhere.
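A toy sketch of the probabilistic-assignment step: demographic strata are drawn in census proportions, then a continuous risk factor is sampled conditionally on the stratum. The strata, shares and distributions below are invented for illustration and are not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical demographic strata and their population shares (sum to 1)
strata = ["30-44 M", "30-44 F", "45-64 M", "45-64 F", "65-84 M", "65-84 F"]
shares = np.array([0.18, 0.18, 0.20, 0.20, 0.12, 0.12])

# hypothetical stratum-specific mean systolic blood pressure (mmHg)
sbp_mean = {"30-44 M": 122, "30-44 F": 117, "45-64 M": 130,
            "45-64 F": 127, "65-84 M": 138, "65-84 F": 136}

def synthesize(n):
    """Monte Carlo sampling of a synthetic population: demographics first,
    then a risk factor drawn conditionally on the demographic stratum."""
    idx = rng.choice(len(strata), size=n, p=shares)
    people = []
    for i in idx:
        s = strata[i]
        people.append({"stratum": s, "sbp": rng.normal(sbp_mean[s], 12.0)})
    return people

population = synthesize(10_000)
```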
GROUND WATER MONITORING AND SAMPLING: MULTI-LEVEL VERSUS TRADITIONAL METHODS – WHAT’S WHAT?
Recent studies have been conducted to evaluate different sampling techniques for determining VOC concentrations in groundwater. Samples were obtained using multi-level and traditional sampling techniques in three monitoring wells at the Raymark Superfund site in Stratford, CT. Ve...
Development of a plant based threshold for tarnished plant bug (Hemiptera: miridae) in cotton
USDA-ARS?s Scientific Manuscript database
The tarnished plant bug, Lygus lineolaris (Palisot de Beauvois), is the most important insect pest of cotton, Gossypium hirsutum L., in the midsouthern United States. It is almost exclusively controlled with foliar insecticide applications, and sampling methods and thresholds need to be revisited. ...
Blair, Christopher; Bryson, Robert W
2017-11-01
Biodiversity reduction and loss continues to progress at an alarming rate, and thus, there is widespread interest in utilizing rapid and efficient methods for quantifying and delimiting taxonomic diversity. Single-locus species delimitation methods have become popular, in part due to the adoption of the DNA barcoding paradigm. These techniques can be broadly classified into tree-based and distance-based methods depending on whether species are delimited based on a constructed genealogy. Although the relative performance of these methods has been tested repeatedly with simulations, additional studies are needed to assess congruence with empirical data. We compiled a large data set of mitochondrial ND4 sequences from horned lizards (Phrynosoma) to elucidate congruence using four tree-based (single-threshold GMYC, multiple-threshold GMYC, bPTP, mPTP) and one distance-based (ABGD) species delimitation models. We were particularly interested in cases with highly uneven sampling and/or large differences in intraspecific diversity. Results showed a high degree of discordance among methods, with multiple-threshold GMYC and bPTP suggesting an unrealistically high number of species (29 and 26 species within the P. douglasii complex alone). The single-threshold GMYC model was the most conservative, likely a result of difficulty in locating the inflection point in the genealogies. mPTP and ABGD appeared to be the most stable across sampling regimes and suggested the presence of additional cryptic species that warrant further investigation. These results suggest that the mPTP model may be preferable in empirical data sets with highly uneven sampling or large differences in effective population sizes of species. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information captured by a Markov random field model of the image can be exploited in segmentation; it effectively removes noise and yields more accurate segmentation results. Based on the fuzziness and clustering of pixel grey-level information, we find the cluster centres of the different tissues and the background in a medical image using the fuzzy c-means clustering method. We then locate the threshold points for multi-threshold segmentation using a two-dimensional histogram method and segment the image accordingly. Finally, multivariate information is fused using Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation is more consistent with human vision and is valuable for the accurate analysis of tissues.
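A compact fuzzy c-means sketch for locating grey-level cluster centres, from which thresholds between tissue classes and background can be placed. It operates on a 1-D sample of grey levels rather than the two-dimensional histogram used above, and the number of clusters is an assumption:

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=3, m=2.0, n_iter=100, tol=1e-5):
    """Fuzzy c-means on a 1-D array of grey levels: returns sorted cluster
    centres and the midpoint thresholds between adjacent centres."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(0)
    centres = rng.choice(values, size=c).astype(float)
    for _ in range(n_iter):
        d = np.abs(values[None, :] - centres[:, None]) + 1e-9        # (c, n)
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        new = np.sum((u ** m) * values[None, :], axis=1) / np.sum(u ** m, axis=1)
        converged = np.max(np.abs(new - centres)) < tol
        centres = new
        if converged:
            break
    centres = np.sort(centres)
    thresholds = (centres[:-1] + centres[1:]) / 2.0
    return centres, thresholds
```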
Laser diodes using InAlGaAs multiple quantum wells intermixed to varying extent
NASA Astrophysics Data System (ADS)
Alahmadi, Yousef; LiKam Wa, Patrick
2018-02-01
Bandgap-modified InAlGaAs/InP multi-quantum well lasers have been demonstrated using an impurity-free disordering technique. Varying degrees of disordering are achieved by rapidly annealing silicon nitride-capped samples at temperatures ranging from 730°C to 830°C for 20 s. The lasing wavelength shift resulting from the intermixing ranges between 28.2 nm and 147.2 nm. As the annealing temperature is increased, the lasing threshold currents of the fabricated waveguide lasers increase from 25 mA to 45 mA, while the slope efficiency decreases from 0.101 W/A to 0.068 W/A, compared to a threshold current of 27.8 mA and a slope efficiency of 0.121 W/A for an as-grown laser diode.
Ultra-high spatial resolution multi-energy CT using photon counting detector technology
NASA Astrophysics Data System (ADS)
Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.
2017-03-01
Two ultra-high-resolution (UHR) imaging modes, referred to as sharp and UHR respectively, each with two energy thresholds, were implemented on a research whole-body photon-counting-detector (PCD) CT scanner. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial-resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used, due to excessive noise in the higher-resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode at regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose or application of noise reduction techniques is needed.
Radon Spectrum and Its Application for Small Moving Target Detection
2015-04-01
(cumulative distribution function) starts separating from the exact distribution only at the very end of the upper tail, normally in the P_fa = 10^-6 range or ... N_total (15). According to (15), for instance, to determine the threshold for P_fa = 10^-6, a total of 1.59 × 10^9 samples shall ... is the incomplete Gamma function. Suppose we require a false-alarm rate of P_fa = 10^-6 for the original data, therefore after non-coherent multi
Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E
2016-05-25
Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.
Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi
2015-01-01
This paper proposes a novel multi-label classification method for spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. Firstly, both fuzzy c-means (FCM) offline clustering and principal component feature extraction algorithms are applied for the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a threshold-based data capture contribution method is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time. PMID:26544549
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jae-Hyuck; Lange, Andrew; Bude, Jeff; ...
2017-02-10
In this paper, we investigated whether the optical and electrical properties of indium tin oxide (ITO) films are degraded under laser irradiation below their laser ablation threshold. While performing multi-pulse laser damage experiments on a single ITO film (4.7 ns, 1064 nm, 10 Hz), we examined the optical and electrical properties in situ. A decrease in reflectance was observed prior to laser damage initiation. However, under sub-damage-threshold irradiation, the conductivity and reflectance of the film were maintained without measurable degradation. This indicates that ITO films in optoelectronic devices may be operated below their lifetime laser damage threshold without noticeable performance degradation.
Fiber Optic Sensor Embedment Study for Multi-Parameter Strain Sensing
Drissi-Habti, Monssef; Raman, Venkadesh; Khadour, Aghiad; Timorian, Safiullah
2017-01-01
Fiber optic sensors (FOSs) are commonly used in large-scale structure monitoring systems for their small size, noise-free operation and low electrical risk. Embedded FOSs, however, can create micro-damage in composite structures; the damage generation threshold depends on the coating material of the FOSs and their diameter. In addition, embedded FOSs are usually aligned parallel to the reinforcement fibers to avoid creating micro-damage, but this linear positioning of distributed FOSs fails to provide all strain parameters. We suggest a novel sinusoidal sensor positioning to overcome this issue; this method can provide multi-parameter strains over a large surface area. The effectiveness of sinusoidal FOS positioning over linear FOS positioning is studied using both numerical and experimental methods. This study demonstrates the advantages of the sinusoidal positioning method for FOSs in composite material bonding. PMID:28333117
Johnson, Kevin A; Baig, Mirza; Ramsey, Dave; Lisanby, Sarah H; Avery, David; McDonald, William M; Li, Xingbao; Bernhardt, Elisabeth R; Haynor, David R; Holtzheimer, Paul E; Sackeim, Harold A; George, Mark S; Nahas, Ziad
2013-03-01
Motor cortex localization and motor threshold determination often guide Transcranial Magnetic Stimulation (TMS) placement and intensity settings for non-motor brain stimulation. However, anatomic variability results in variability of placement and effective intensity. Post-study analysis of the OPT-TMS Study reviewed both the final positioning and the effective intensity of stimulation (accounting for relative prefrontal scalp-cortex distances). We acquired MRI scans of 185 patients in a multi-site trial of left prefrontal TMS for depression. Scans had marked motor sites (localized with TMS) and marked prefrontal sites (5 cm anterior of motor cortex by the "5 cm rule"). Based on a visual determination made before the first treatment, TMS therapy occurred either at the 5 cm location or was adjusted 1 cm forward. Stimulation intensity was 120% of resting motor threshold. The "5 cm rule" would have placed stimulation in premotor cortex for 9% of patients, which was reduced to 4% with adjustments. We did not find a statistically significant effect of positioning on remission, but no patients with premotor stimulation achieved remission (0/7). Effective stimulation ranged from 93 to 156% of motor threshold, and no seizures were induced across this range. Patients experienced remission with effective stimulation intensity ranging from 93 to 146% of motor threshold, and we did not find a significant effect of effective intensity on remission. Our data indicates that individualized positioning methods are useful to reduce variability in placement. Stimulation at 120% of motor threshold, unadjusted for scalp-cortex distances, appears safe for a broad range of patients. Copyright © 2013 Elsevier Inc. All rights reserved.
Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology
NASA Astrophysics Data System (ADS)
Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.
2015-01-01
Collagen-elastin biopolymer thin films treated by a CPA Ti:Sapphire laser (Femtopower Compact Pro) at 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. Surface modification and the creation of a microporous scaffold after ultrashort laser irradiation were observed. The single-shot (N = 1) and multi-shot (N > 1) ablation threshold values were estimated from the linear relationship between the square of the crater diameter D^2 and the logarithm of the laser fluence F, yielding threshold fluences for N = 1, 2, 5, 10, 15 and 30 laser pulses. The incubation analysis was carried out by calculating the incubation coefficient ξ from the power-law relationship F_th(N) = F_th(1)·N^(ξ-1) for the multi-shot fluence thresholds of the selected materials. We also present an alternative multi-shot ablation threshold calculation based on the logarithmic dependence of the ablation rate d on the laser fluence. The morphological changes of the modified surface regions were characterized by scanning electron microscopy to assess the variations generated by the laser treatment.
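A small sketch of the incubation analysis: fit the multi-shot thresholds to F_th(N) = F_th(1)·N^(ξ-1) by linear regression in log-log space. The threshold fluences below are placeholder numbers, not values from the study:

```python
import numpy as np

# hypothetical multi-shot threshold fluences (J/cm^2) versus pulse number N
N = np.array([1, 2, 5, 10, 15, 30])
F_th = np.array([1.20, 1.05, 0.90, 0.80, 0.75, 0.68])

# F_th(N) = F_th(1) * N**(xi - 1)  ->  ln F_th = ln F_th(1) + (xi - 1) ln N
slope, intercept = np.polyfit(np.log(N), np.log(F_th), 1)
xi = slope + 1.0
F1 = np.exp(intercept)
print(f"incubation coefficient xi = {xi:.3f}, single-shot threshold = {F1:.3f} J/cm^2")
```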
Impacts of selected stimulation patterns on the perception threshold in electrocutaneous stimulation
2011-01-01
Background: Consistency is one of the most important concerns to convey stable artificially induced sensory feedback. However, the constancy of perceived sensations cannot be guaranteed, as the artificially evoked sensation is a function of the interaction of stimulation parameters. The hypothesis of this study is that the selected stimulation parameters in multi-electrode cutaneous stimulation have significant impacts on the perception threshold. Methods: The investigated parameters included the stimulated location, the number of active electrodes, the number of pulses, and the interleaved time between a pair of electrodes. Biphasic, rectangular pulses were applied via five surface electrodes placed on the forearm of 12 healthy subjects. Results: Our main findings were: 1) the perception thresholds at the five stimulated locations were significantly different (p < 0.0001), 2) dual-channel simultaneous stimulation lowered the perception thresholds and led to smaller variance in perception thresholds compared to single-channel stimulation, 3) the perception threshold was inversely related to the number of pulses, and 4) the perception threshold increased with increasing interleaved time when the interleaved time between two electrodes was below 500 μs. Conclusions: To maintain a consistent perception threshold, our findings indicate that dual-channel simultaneous stimulation with at least five pulses should be used, and that the interleaved time between two electrodes should be longer than 500 μs. We believe that these findings have implications for design of reliable sensory feedback codes. PMID:21306616
Integration of hybrid silicon lasers and electroabsorption modulators.
Sysak, Matthew N; Anthes, Joel O; Bowers, John E; Raday, Omri; Jones, Richard
2008-08-18
We present an integration platform based on quantum well intermixing for multi-section hybrid silicon lasers and electroabsorption modulators. As a demonstration of the technology, we have fabricated discrete sampled grating DBR lasers and sampled grating DBR lasers integrated with InGaAsP/InP electroabsorption modulators. The integrated sampled grating DBR laser-modulators use the as-grown III-V bandgap for optical gain, a 50 nm blue-shifted bandgap for the electroabsorption modulators, and an 80 nm blue-shifted bandgap for low-loss mirrors. Continuous-wave laser operation up to 45 °C is achieved with output power >1.0 mW and threshold current <50 mA. The modulator bandwidth is >2 GHz with 5 dB DC extinction.
Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory
NASA Astrophysics Data System (ADS)
Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng
Financial institutions and supervisory bodies agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational risk losses based on the Peaks Over Threshold (POT) model, with emphasis on a weighted least squares refinement of Hill's estimation method; it also discusses the small-sample situation and fixes the sample threshold more objectively, using media-published data on the operational risk losses of primary banks from 1994 to 2007.
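A minimal sketch of the peaks-over-threshold tail estimation the abstract refers to, using the classical (unweighted) Hill estimator; the weighted least squares refinement and the objective threshold selection of the paper are not reproduced, and the loss data are simulated:

```python
import numpy as np

def hill_estimator(losses, threshold):
    """Classical Hill estimator of the tail index for losses exceeding a
    given threshold (peaks-over-threshold setting)."""
    exceedances = losses[losses > threshold]
    k = exceedances.size
    if k < 2:
        raise ValueError("too few exceedances above the threshold")
    return np.mean(np.log(exceedances) - np.log(threshold)), k

# toy heavy-tailed operational-loss sample (Pareto with tail index 2)
rng = np.random.default_rng(1)
losses = rng.pareto(2.0, size=5000) + 1.0
gamma_hat, k = hill_estimator(losses, threshold=np.quantile(losses, 0.95))
print(f"Hill estimate of 1/alpha = {gamma_hat:.3f} from {k} exceedances")
```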
NASA Astrophysics Data System (ADS)
Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.
2008-02-01
Methods were refined and tested for identifying the aggregations of Antarctic euphausiids ( Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue [doi:10.1016/j.dsr2.2007.11.014]].
Almenoff, June S; LaCroix, Karol K; Yuen, Nancy A; Fram, David; DuMouchel, William
2006-01-01
There is increasing interest in using disproportionality-based signal detection methods to support postmarketing safety surveillance activities. Two commonly used methods, empirical Bayes multi-item gamma Poisson shrinker (MGPS) and proportional reporting ratio (PRR), perform differently with respect to the number and types of signals detected. The goal of this study was to compare and analyse the performance characteristics of these two methods, to understand why they differ and to consider the practical implications of these differences for a large, industry-based pharmacovigilance department. We compared the numbers and types of signals of disproportionate reporting (SDRs) obtained with MGPS and PRR using two postmarketing safety databases and a simulated database. We recorded signal counts and performed a qualitative comparison of the drug-event combinations signalled by the two methods as well as a sensitivity analysis to better understand how the thresholds commonly used for these methods impact their performance. PRR detected more SDRs than MGPS. We observed that MGPS is less subject to confounding by demographic factors because it employs stratification and is more stable than PRR when report counts are low. Simulation experiments performed using published empirical thresholds demonstrated that PRR detected false-positive signals at a rate of 1.1%, while MGPS did not detect any statistical false positives. In an attempt to separate the effect of choice of signal threshold from more fundamental methodological differences, we performed a series of experiments in which we modified the conventional threshold values for each method so that each method detected the same number of SDRs for the example drugs studied. This analysis, which provided quantitative examples of the relationship between the published thresholds for the two methods, demonstrates that the signalling criterion published for PRR has a higher signalling frequency than that published for MGPS. The performance differences between the PRR and MGPS methods are related to (i) greater confounding by demographic factors with PRR; (ii) a higher tendency of PRR to detect false-positive signals when the number of reports is small; and (iii) the conventional thresholds that have been adapted for each method. PRR tends to be more 'sensitive' and less 'specific' than MGPS. A high-specificity disproportionality method, when used in conjunction with medical triage and investigation of critical medical events, may provide an efficient and robust approach to applying quantitative methods in routine postmarketing pharmacovigilance.
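For reference, the proportional reporting ratio is computed from a 2×2 table of spontaneous reports; the sketch below also computes the Pearson chi-square that is commonly used alongside it, with one widely cited signalling criterion (PRR ≥ 2, chi-square ≥ 4, at least 3 reports) used only as an illustrative, not study-specific, threshold:

```python
def prr_signal(a, b, c, d):
    """a: reports with drug & event, b: drug without event,
    c: other drugs with event, d: other drugs without event."""
    n = a + b + c + d
    prr = (a / (a + b)) / (c / (c + d))
    chi2 = 0.0                                # Pearson chi-square on the 2x2 table
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n
        chi2 += (obs - exp) ** 2 / exp
    return prr, chi2, (prr >= 2 and chi2 >= 4 and a >= 3)

print(prr_signal(a=12, b=988, c=150, d=98850))
```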
Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J
2014-03-01
Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results in larger data sets, and the methods based on generative or discriminative models have intrinsic limitations during application, such as small sample set learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images are segmented into superpixels using algorithms to alleviate the sampling issue and to improve the sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on the features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture notes in computer science. vol 7512, pp 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the art algorithms.
Machine vision application in animal trajectory tracking.
Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter
2016-04-01
This article was motivated by doctors' demand for technical support, based on machine vision tools, for research on pathologies of the gastrointestinal tract [10]. The proposed solution should be a less expensive alternative to existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and colour matching against a chosen template containing the searched spectrum of colours. The methods were tested offline on five video samples. Each sample contained a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
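A minimal sketch of the background-subtraction variant with a global threshold, returning the animal's centroid in each frame and a simple motion measure; frames are assumed to be 8-bit greyscale numpy arrays, and no camera or file I/O is shown:

```python
import numpy as np

def track_centroids(frames, background, threshold=30):
    """For each greyscale frame, subtract a static background image,
    threshold the absolute difference and return the centroid of the
    foreground pixels (None if nothing exceeds the threshold)."""
    centroids = []
    for frame in frames:
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        ys, xs = np.nonzero(diff > threshold)
        centroids.append((xs.mean(), ys.mean()) if xs.size else None)
    return centroids

def path_length(centroids):
    """Total distance travelled, a simple proxy for the amount of motion."""
    pts = [c for c in centroids if c is not None]
    return sum(np.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```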
Greenland, K; Rondy, M; Chevez, A; Sadozai, N; Gasasira, A; Abanida, E A; Pate, M A; Ronveaux, O; Okayasu, H; Pedalino, B; Pezzoli, L
2011-07-01
To evaluate oral poliovirus vaccine (OPV) coverage of the November 2009 round in five Northern Nigeria states with ongoing wild poliovirus transmission using clustered lot quality assurance sampling (CLQAS). We selected four local government areas in each pre-selected state and sampled six clusters of 10 children in each Local Government Area, defined as the lot area. We used three decision thresholds to classify OPV coverage: 75-90%, 55-70% and 35-50%. A full lot was completed, but we also assessed in retrospect the potential time-saving benefits of stopping sampling when a lot had been classified. We accepted two local government areas (LGAs) with vaccination coverage above 75%. Of the remaining 18 rejected LGAs, 11 also failed to reach 70% coverage, of which four also failed to reach 50%. The average time taken to complete a lot was 10 h. By stopping sampling when a decision was reached, we could have classified lots in 5.3, 7.7 and 7.3 h on average at the 90%, 70% and 50% coverage targets, respectively. Clustered lot quality assurance sampling was feasible and useful to estimate OPV coverage in Northern Nigeria. The multi-threshold approach provided useful information on the variation of IPD vaccination coverage. CLQAS is a very timely tool, allowing corrective actions to be directly taken in insufficiently covered areas. © 2011 Blackwell Publishing Ltd.
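The multi-threshold classification can be pictured as successively testing the vaccinated count in a lot against decision values for each coverage band; the decision values below are hypothetical placeholders, not the study's rules:

```python
def classify_lot(n_vaccinated, n_sampled, rules):
    """Classify a lot against successively lower coverage thresholds.
    'rules' pairs a coverage label with the minimum number of vaccinated
    children (out of n_sampled) required to accept at that level."""
    assert n_vaccinated <= n_sampled
    for label, min_vaccinated in rules:
        if n_vaccinated >= min_vaccinated:
            return f"coverage classified as >= {label}"
    return "coverage classified as below the lowest threshold"

# hypothetical decision values for a lot of 6 clusters x 10 children
rules = [("75%", 50), ("55%", 39), ("35%", 27)]
print(classify_lot(42, 60, rules))
```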
Schubert, Maria; Clarke, Sean P; Glass, Tracy R; Schaffert-Witvliet, Bianca; De Geest, Sabina
2009-07-01
In the Rationing of Nursing Care in Switzerland Study, implicit rationing of care was the only factor consistently significantly associated with all six studied patient outcomes. These results highlight the importance of rationing as a new system factor regarding patient safety and quality of care. Since at least some rationing of care appears inevitable, it is important to identify the thresholds of its influences in order to minimize its negative effects on patient outcomes. To describe the levels of implicit rationing of nursing care in a sample of Swiss acute care hospitals and to identify clinically meaningful thresholds of rationing. Descriptive cross-sectional multi-center study. Five Swiss-German and three Swiss-French acute care hospitals. 1338 nurses and 779 patients. Implicit rationing of nursing care was measured using the newly developed Basel Extent of Rationing of Nursing Care (BERNCA) instrument. Other variables were measured using survey items from the International Hospital Outcomes Study battery. Data were summarized using appropriate descriptive measures, and logistic regression models were used to define a clinically meaningful rationing threshold level. For the studied patient outcomes, identified rationing threshold levels varied from 0.5 (i.e., between 0 ('never') and 1 ('rarely')) to 2 ('sometimes'). Three of the identified patient outcomes (nosocomial infections, pressure ulcers, and patient satisfaction) were particularly sensitive to rationing, showing negative consequences anywhere it was consistently reported (i.e., average BERNCA scores of 0.5 or above). In other cases, increases in negative outcomes were first observed from the level of 1 (average ratings of rarely). Rationing scores generated using the BERNCA instrument provide a clinically meaningful method for tracking how low resources or difficulties in resource allocation relate to patient outcomes. Thresholds identified here provide parameters for administrators to respond to whenever rationing reports exceed the determined level of '0.5' or '1'. Since even very low levels of rationing had negative consequences on three of the six studied outcomes, it is advisable to treat consistent evidence of any rationing as a significant threat to patient safety and quality of care.
Wang, Ming; Zhang, Kai; Dai, Xin-Xin; Li, Yin; Guo, Jiang; Liu, Hu; Li, Gen-Hui; Tan, Yan-Jun; Zeng, Jian-Bing; Guo, Zhanhu
2017-08-10
Formation of highly conductive networks is essential for achieving flexible conductive polymer composites (CPCs) with high force sensitivity and high electrical conductivity. In this study, self-segregated structures were constructed in polydimethylsiloxane/multi-wall carbon nanotube (PDMS/MWCNT) nanocomposites, which then exhibited high piezoresistive sensitivity and low percolation threshold without sacrificing their mechanical properties. First, PDMS was cured and pulverized into 40-60 mesh-sized particles (with the size range of 250-425 μm) as an optimum self-segregated phase to improve the subsequent electrical conductivity. Then, the uncured PDMS/MWCNT base together with the curing agent was mixed with the abovementioned PDMS particles, serving as the segregated phase. Finally, the mixture was cured again to form the PDMS/MWCNT nanocomposites with self-segregated structures. The morphological evaluation indicated that MWCNTs were located in the second cured three-dimensional (3D) continuous PDMS phase, resulting in an ultralow percolation threshold of 0.003 vol% MWCNTs. The nanocomposites with self-segregated structures with 0.2 vol% MWCNTs achieved a high electrical conductivity of 0.003 S m -1 , whereas only 4.87 × 10 -10 S m -1 was achieved for the conventional samples with 0.2 vol% MWCNTs. The gauge factor GF of the self-segregated samples was 7.4-fold that of the conventional samples at 30% compression strain. Furthermore, the self-segregated samples also showed higher compression modulus and strength as compared to the conventional samples. These enhanced properties were attributed to the construction of 3D self-segregated structures, concentrated distribution of MWCNTs, and strong interfacial interaction between the segregated phase and the continuous phase with chemical bonds formed during the second curing process. These self-segregated structures provide a new insight into the fabrication of elastomers with high electrical conductivity and piezoresistive sensitivity for flexible force-sensitive materials.
Molecular orbital imaging via above-threshold ionization with circularly polarized pulses.
Zhu, Xiaosong; Zhang, Qingbin; Hong, Weiyi; Lu, Peixiang; Xu, Zhizhan
2011-07-18
Above-threshold ionization (ATI) of aligned or oriented linear molecules by circularly polarized laser pulses is investigated. It is found that the all-round structural information of the molecular orbital is extracted with only one shot of the circularly polarized probe pulse, rather than with the multi-shot detections required in the linearly polarized case. The obtained photoelectron momentum spectrum directly depicts the symmetry and electron distribution of the occupied molecular orbital, which results from the strong sensitivity of the ionization probability to these structural features. Our investigation indicates that the circularly polarized probe scheme provides a simple method to study angle-dependent ionization and to image the occupied electronic orbital.
Phosphatase activity tunes two-component system sensor detection threshold.
Landry, Brian P; Palanki, Rohan; Dyulgyarov, Nikola; Hartsough, Lucas A; Tabor, Jeffrey J
2018-04-12
Two-component systems (TCSs) are the largest family of multi-step signal transduction pathways in biology, and a major source of sensors for biotechnology. However, the input concentrations to which biosensors respond are often mismatched with application requirements. Here, we utilize a mathematical model to show that TCS detection thresholds increase with the phosphatase activity of the sensor histidine kinase. We experimentally validate this result in engineered Bacillus subtilis nitrate and E. coli aspartate TCS sensors by tuning their detection threshold up to two orders of magnitude. We go on to apply our TCS tuning method to recently described tetrathionate and thiosulfate sensors by mutating a widely conserved residue previously shown to impact phosphatase activity. Finally, we apply TCS tuning to engineer B. subtilis to sense and report a wide range of fertilizer concentrations in soil. This work will enable the engineering of tailor-made biosensors for diverse synthetic biology applications.
Turbidity threshold sampling: Methods and instrumentation
Rand Eads; Jack Lewis
2001-01-01
Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...
Deng, Hang; Fitts, Jeffrey P.; Peters, Catherine A.
2016-02-01
This paper presents a new method—the Technique of Iterative Local Thresholding (TILT)—for processing 3D X-ray computed tomography (xCT) images for visualization and quantification of rock fractures. The TILT method includes the following advancements. First, custom masks are generated by a fracture-dilation procedure, which significantly amplifies the fracture signal on the intensity histogram used for local thresholding. Second, TILT is particularly well suited for fracture characterization in granular rocks because the multi-scale Hessian fracture (MHF) filter has been incorporated to distinguish fractures from pores in the rock matrix. Third, TILT wraps the thresholding and fracture isolation steps in an optimized iterative routine for binary segmentation, minimizing human intervention and enabling automated processing of large 3D datasets. As an illustrative example, we applied TILT to 3D xCT images of reacted and unreacted fractured limestone cores. Other segmentation methods were also applied to provide insights regarding variability in image processing. The results show that TILT significantly enhanced separability of grayscale intensities, outperformed the other methods in automation, and was successful in isolating fractures from the porous rock matrix. Because the other methods are more likely to misclassify fracture edges as void and/or have limited capacity in distinguishing fractures from pores, those methods estimated larger fracture volumes (up to 80 %), surface areas (up to 60 %), and roughness (up to a factor of 2). In conclusion, these differences in fracture geometry would lead to significant disparities in hydraulic permeability predictions, as determined by 2D flow simulations.
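A sketch of the core iterated-threshold idea (a Ridler-Calvard style iteration applied to the grey levels inside a mask); the fracture-dilation masking and the multi-scale Hessian filtering described above are not reproduced here:

```python
import numpy as np

def iterative_threshold(values, tol=0.5, max_iter=100):
    """Ridler-Calvard style iteration: repeatedly set the threshold to the
    mean of the two class means until it stabilizes. 'values' would be the
    grey levels inside a local (e.g. dilated-fracture) mask."""
    values = np.asarray(values, dtype=float)
    t = values.mean()
    for _ in range(max_iter):
        low, high = values[values <= t], values[values > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```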
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
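A toy sketch of the two-stage estimator: a cheap low-fidelity model explores the input space to build a Gaussian biasing density around the failure region, and a small number of high-fidelity evaluations are reweighted by the likelihood ratio to give an unbiased estimate. The limit-state functions, input density and sample sizes below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def g_high(x): return 0.50 - x      # expensive model: failure when g < 0
def g_low(x):  return 0.49 - x      # cheap, slightly biased surrogate

mu, sigma = 0.40, 0.03              # nominal density of the uncertain input

# 1) explore with the low-fidelity model to locate (surrogate) failures
x_lo = rng.normal(mu, sigma, 100_000)
fail_lo = x_lo[g_low(x_lo) < 0]

# 2) fit a (slightly inflated) Gaussian biasing density to those failures
mu_b, sigma_b = fail_lo.mean(), 2.0 * fail_lo.std()

# 3) a few high-fidelity evaluations under the biasing density, reweighted
def pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

x_hi = rng.normal(mu_b, sigma_b, 200)
w = pdf(x_hi, mu, sigma) / pdf(x_hi, mu_b, sigma_b)
p_fail = np.mean((g_high(x_hi) < 0) * w)        # unbiased estimate
print(f"estimated failure probability: {p_fail:.2e}")
```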
Nayagam, David A. X.; Williams, Richard A.; Allen, Penelope J.; Shivdasani, Mohit N.; Luu, Chi D.; Salinas-LaRosa, Cesar M.; Finch, Sue; Ayton, Lauren N.; Saunders, Alexia L.; McPhedran, Michelle; McGowan, Ceara; Villalobos, Joel; Fallon, James B.; Wise, Andrew K.; Yeoh, Jonathan; Xu, Jin; Feng, Helen; Millard, Rodney; McWade, Melanie; Thien, Patrick C.; Williams, Chris E.; Shepherd, Robert K.
2014-01-01
Purpose To assess the safety and efficacy of chronic electrical stimulation of the retina with a suprachoroidal visual prosthesis. Methods Seven normally-sighted feline subjects were implanted for 96–143 days with a suprachoroidal electrode array and six were chronically stimulated for 70–105 days at levels that activated the visual cortex. Charge balanced, biphasic, current pulses were delivered to platinum electrodes in a monopolar stimulation mode. Retinal integrity/function and the mechanical stability of the implant were assessed monthly using electroretinography (ERG), optical coherence tomography (OCT) and fundus photography. Electrode impedances were measured weekly and electrically-evoked visual cortex potentials (eEVCPs) were measured monthly to verify that chronic stimuli were suprathreshold. At the end of the chronic stimulation period, thresholds were confirmed with multi-unit recordings from the visual cortex. Randomized, blinded histological assessments were performed by two pathologists to compare the stimulated and non-stimulated retina and adjacent tissue. Results All subjects tolerated the surgical and stimulation procedure with no evidence of discomfort or unexpected adverse outcomes. After an initial post-operative settling period, electrode arrays were mechanically stable. Mean electrode impedances were stable between 11–15 kΩ during the implantation period. Visually-evoked ERGs & OCT were normal, and mean eEVCP thresholds did not substantially differ over time. In 81 of 84 electrode-adjacent tissue samples examined, there were no discernible histopathological differences between stimulated and unstimulated tissue. In the remaining three tissue samples there were minor focal fibroblastic and acute inflammatory responses. Conclusions Chronic suprathreshold electrical stimulation of the retina using a suprachoroidal electrode array evoked a minimal tissue response and no adverse clinical or histological findings. Moreover, thresholds and electrode impedance remained stable for stimulation durations of up to 15 weeks. This study has demonstrated the safety and efficacy of suprachoroidal stimulation with charge balanced stimulus currents. PMID:24853376
NASA Astrophysics Data System (ADS)
Sharif, Morteza A.; Majles Ara, M. H.; Ghafary, Bijan; Salmani, Somayeh; Mohajer, Salman
2016-03-01
We have experimentally investigated low-threshold optical bistability (OB) and multi-stability in exfoliated graphene ink with a low oxidation degree. Theoretical predictions from the N-layer problem and the resonator feedback problem show good agreement with the experimental observations. In contrast to other graphene oxide samples, we show that absorbance does not restrict the OB process. We conclude from the experimental results and the Nonlinear Schrödinger Equation (NLSE) that nonlinear dispersion - rather than absorption - is the main nonlinear mechanism of OB. In addition to the enhanced nonlinearity, exfoliated graphene with a low oxidation degree possesses a band-gap energy equivalent to that of group III-V semiconductors, high charge-carrier mobility and thus an ultrafast optical response, which makes it a unique optical material for applications in all-optical switching, especially in the THz frequency range.
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through optimum control of the trade-off between transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations confirm the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Pérez-Báez, Wendy; García-Latorre, Ethel A; Maldonado-Martínez, Héctor Aquiles; Coronado-Martínez, Iris; Flores-García, Leonardo; Taja-Chayeb, Lucía
2017-10-01
Treatment in metastatic colorectal cancer (mCRC) has expanded with monoclonal antibodies targeting epidermal growth factor receptor, but is restricted to patients with a wild-type (WT) KRAS mutational status. The most sensitive assays for KRAS mutation detection in formalin-fixed paraffin embedded (FFPE) tissues are based on real-time PCR. Among them, high resolution melting analysis (HRMA) is a simple, fast, highly sensitive, specific and cost-effective method proposed as an adjunct for KRAS mutation detection. However, the method used to categorize WT vs mutant sequences in HRMA is not clearly specified in available studies, and the impact of FFPE artifacts on HRMA performance has not been addressed either. Avowedly adequate samples from 104 consecutive mCRC patients were tested for KRAS mutations by Therascreen™ (FDA-validated test), HRMA, and HRMA with UDG pre-treatment to reverse FFPE fixation artifacts. Comparisons of KRAS status allocation among the three methods were done. Focusing on HRMA as a screening test, ROC curve analyses were performed for HRMA and HRMA-UDG against Therascreen™, in order to evaluate their discriminative power and to determine the threshold of profile concordance between WT control and sample for KRAS status determination. Comparing HRMA and HRMA-UDG against Therascreen™ as surrogate gold standard, sensitivity was 1 for both HRMA and HRMA-UDG; specificity and positive predictive values were, respectively, 0.838 and 0.939, and 0.777 and 0.913. As evaluated by the McNemar test, HRMA-UDG allocated samples to a WT/mutated genotype in a significantly different way from HRMA (p < 0.001). On the other hand, HRMA-UDG did not differ from Therascreen™ (p = 0.125). ROC-curve analysis showed a significant discriminative power for both HRMA and HRMA-UDG against Therascreen™ (respectively, AUC of 0.978, p < 0.0001, 95% CI 0.957-0.999; and AUC of 0.98, p < 0.0001, 95% CI 0.000-1.0). For HRMA as a screening tool, the best threshold (degree of concordance between sample curves and WT control) was attained at 92.14% for HRMA (specificity of 0.887), and at 92.55% for HRMA-UDG (specificity of 0.952). HRMA is a highly sensitive method for KRAS mutation detection, with apparently adequate and statistically significant discriminative power. FFPE sample fixation artifacts have an impact on HRMA results, so pre-treatment with UDG should be strongly suggested for HRMA on FFPE samples. The choice of the threshold for melting curve concordance also has a great impact on HRMA performance. A threshold of 93% or greater might be adequate if using HRMA as a screening tool. Further validation of this threshold is required. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
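As a concrete illustration of the wavelet-threshold pre-processing step, the sketch below applies soft thresholding to the detail coefficients of a discrete wavelet decomposition. The 'db4' wavelet, the decomposition level and the universal-threshold rule are generic choices for illustration, not the paper's improved thresholding method.

```python
# A minimal sketch of wavelet-threshold denoising for a 1-D ECG trace.
# The wavelet, level, and universal threshold are illustrative choices,
# not the paper's improved thresholding rule.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

# Example: a noisy synthetic beat-like signal.
t = np.linspace(0, 1, 512)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_denoise(ecg_like)
```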
Spindler, Patrice; Paretti, Nick V.
2007-01-01
The Arizona Department of Environmental Quality (ADEQ) and the U.S. Environmental Protection Agency (USEPA) Ecological Monitoring and Assessment Program (EMAP), use different field methods for collecting macroinvertebrate samples and habitat data for bioassessment purposes. Arizona’s Biocriteria index was developed using a riffle habitat sampling methodology, whereas the EMAP method employs a multi-habitat sampling protocol. There was a need to demonstrate comparability of these different bioassessment methodologies to allow use of the EMAP multi-habitat protocol for both statewide probabilistic assessments for integration of the EMAP data into the national (305b) assessment and for targeted in-state bioassessments for 303d determinations of standards violations and impaired aquatic life conditions. The purpose of this study was to evaluate whether the two methods yield similar bioassessment results, such that the data could be used interchangeably in water quality assessments. In this Regional EMAP grant funded project, a probabilistic survey of 30 sites in the Little Colorado River basin was conducted in the spring of 2007. Macroinvertebrate and habitat data were collected using both ADEQ and EMAP sampling methods, from adjacent reaches within these stream channels.
All analyses indicated that the two macroinvertebrate sampling methods were significantly correlated. ADEQ and EMAP samples were classified into the same scoring categories (meeting, inconclusive, violating the biocriteria standard) 82% of the time. When the ADEQ-IBI was applied to both the ADEQ and EMAP taxa lists, the resulting IBI scores were significantly correlated (r=0.91), even though only 4 of the 7 metrics in the IBI were significantly correlated. The IBI scores from both methods were significantly correlated to the percent of riffle habitat, even though the average percent riffle habitat was only 30% of the stream reach. Multivariate analyses found that the percent riffle was an important attribute for both datasets in classifying IBI scores into assessment categories.
Habitat measurements generated from EMAP and ADEQ methods were also significantly correlated; 13 of 16 habitat measures were significantly correlated (p<0.01). The visual-based percentage estimates of percent riffle and pool habitats, vegetative cover and percent canopy cover, and substrate measurements of percent fine substrate and embeddedness were all remarkably similar, given the different field methods used. A multivariate analysis identified substrate and flow conditions, as well as canopy cover, as important combinations of habitat attributes affecting IBI scores from both methods. These results indicate that similar habitat measures can be obtained using two different field sampling protocols. In addition, similar combinations of these habitat parameters were important to macroinvertebrate community condition in multivariate analyses of both ADEQ and EMAP datasets.
These results indicate the two sampling methods for macroinvertebrates and habitat data were very similar in terms of bioassessment results and stressors. While the bioassessment category was not identical for all sites, overall the assessments were significantly correlated, providing similar bioassessment results for the cold water streams used in this study. The findings of this study indicate that ADEQ can utilize either a riffle-based sampling methodology or a multi-habitat sampling approach in cold water streams as both yield similar results relative to the macroinvertebrate assemblage. These results will allow for use of either macroinvertebrate dataset to determine water quality standards compliance with the ADEQ Indexes of Biological Integrity, for which threshold values were just recently placed into the Arizona Surface Water Quality Standards. While this survey did not include warm water desert streams of Arizona, we would predict that EMAP and ADEQ sampling methodologies would provide similar bioassessment results and would not be significantly different, as we have found that the percent riffle habitat in cold and warm water perennial, wadeable streams is not significantly different. However, a comparison study of sampling methodologies in warm water streams should be conducted to confirm the predicted similarity of bioassessment results. ADEQ will continue to implement a monitoring strategy that includes probabilistic monitoring for a statewide ecological assessment of stream conditions. Conclusions from this study will guide decisions regarding the most appropriate sampling methods for future probabilistic monitoring sample plans.
Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik
2014-09-01
A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
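For reference, the net monetary benefit used in the NMB framework is simply the monetized incremental health effect minus the incremental cost; the sketch below shows that standard calculation with purely illustrative numbers.

```python
# Standard net monetary benefit (NMB) calculation with illustrative numbers.
def net_monetary_benefit(delta_qaly, delta_cost, willingness_to_pay):
    """NMB = lambda * incremental health effect - incremental cost."""
    return willingness_to_pay * delta_qaly - delta_cost

# Hypothetical example: 0.3 QALYs gained for an extra cost of 12,000
# at a willingness-to-pay of 50,000 per QALY.
nmb = net_monetary_benefit(delta_qaly=0.3, delta_cost=12_000,
                           willingness_to_pay=50_000)
print(nmb)  # 3000.0 -> positive, so the intervention would be considered worthwhile
```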
Linear segmentation algorithm for detecting layer boundary with lidar.
Mao, Feiyue; Gong, Wei; Logan, Timothy
2013-11-04
The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies but can also be used as input for further data processing. Previous methods have shown limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with reasonable threshold-setting, boundary-selecting, and false-positive-removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window-size selections.
SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Han, J; Ailawadi, S
Purpose: Normal organ segmentation is one time-consuming and labor-intensive step in lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs-at-risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = (intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffuse boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.
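The probability-map thresholding and Dice evaluation described above can be sketched compactly; the code below assumes the atlas-to-target deformable registrations have already produced binary organ masks in the target frame (hypothetical inputs) and is an illustration, not the study's implementation.

```python
# Minimal sketch: build an organ probability density (OPD) map from warped
# atlas masks, threshold it, and score the result against a reference contour.
import numpy as np

def organ_probability_density(warped_masks):
    """Average the deformed atlas masks to obtain a per-voxel organ probability."""
    return np.mean(np.stack(warped_masks, axis=0), axis=0)

def segment_by_threshold(opd, threshold):
    return opd >= threshold

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example with random "warped" masks and a reference mask.
rng = np.random.default_rng(1)
warped = [rng.random((32, 32, 32)) > 0.6 for _ in range(15)]
reference = rng.random((32, 32, 32)) > 0.6
opd = organ_probability_density(warped)
auto = segment_by_threshold(opd, threshold=0.38)   # e.g. the aorta threshold above
print(f"Dice = {dice(auto, reference):.2f}")
```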
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
2016-09-01
This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
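The reward structure described above (penalize every step on which any object exceeds the catalog uncertainty threshold, reward the step on which all objects meet it) can be written compactly. The sketch below illustrates only that shaping, with hypothetical per-object error inputs and reward magnitudes; it is not the paper's full actor-critic implementation.

```python
# Reward shaping for the catalog-uncertainty objective: negative while any
# space object's total position error exceeds the threshold, positive once
# all objects are below it. Inputs are hypothetical per-object errors in km.
import numpy as np

UNCERTAINTY_THRESHOLD_KM = 30.0

def catalog_reward(position_errors_km, step_penalty=-1.0, success_reward=10.0):
    errors = np.asarray(position_errors_km, dtype=float)
    if np.all(errors <= UNCERTAINTY_THRESHOLD_KM):
        return success_reward
    return step_penalty  # penalizes policies that take longer to converge

print(catalog_reward([12.0, 45.0, 8.0]))   # -1.0: one object still above threshold
print(catalog_reward([12.0, 25.0, 8.0]))   #  10.0: catalog meets the requirement
```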
Credibilistic multi-period portfolio optimization based on scenario tree
NASA Astrophysics Data System (ADS)
Mohebbi, Negin; Najafi, Amir Abbas
2018-02-01
In this paper, we consider a multi-period fuzzy portfolio optimization model that accounts for transaction costs and the possibility of risk-free investment. We formulate a bi-objective mean-VaR portfolio selection model based on the integration of fuzzy credibility theory and a scenario tree in order to deal with market uncertainty. The scenario tree is also a suitable method for modeling multi-period portfolio problems, given the length and continuity of their horizon. We take return and risk, as well as cardinality, threshold, class, and liquidity constraints, into consideration for further compliance of the model with reality. Then, an interactive dynamic programming method, which is based on a two-phase fuzzy interactive approach, is employed to solve the proposed model. To verify the proposed model, we present an empirical application on the NYSE under different circumstances. The results show that considering data uncertainty and other real-world assumptions leads to more practical and efficient solutions.
An automatic brain tumor segmentation tool.
Diaz, Idanis; Boulanger, Pierre; Greiner, Russell; Hoehn, Bret; Rowe, Lindsay; Murtha, Albert
2013-01-01
This paper introduces an automatic brain tumor segmentation method (ABTS) for segmenting multiple components of brain tumors using four magnetic resonance image modalities. ABTS's four stages involve automatic histogram multi-thresholding and morphological operations including geodesic dilation. Our empirical results, on 16 real tumors, show that ABTS works very effectively, achieving Dice accuracies, compared to expert segmentation, of 81% in segmenting edema and 85% in segmenting gross tumor volume (GTV).
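A simplified stand-in for the histogram multi-thresholding and geodesic-dilation stages is sketched below using multi-Otsu thresholds and morphological reconstruction; the number of classes, the threshold choices and the 2D toy image are illustrative and are not the ABTS parameters.

```python
# Histogram multi-thresholding followed by geodesic dilation (morphological
# reconstruction by dilation). Illustrative parameters only.
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.morphology import reconstruction

def rough_lesion_mask(image, n_classes=3):
    thresholds = threshold_multiotsu(image, classes=n_classes)
    loose = image > thresholds[-2]   # permissive mask (includes edema-like intensities)
    core = image > thresholds[-1]    # strict seed (bright core)
    # Geodesic dilation grows the core but never leaves the permissive mask,
    # keeping the region connected to high-confidence voxels.
    grown = reconstruction(core.astype(float), loose.astype(float), method="dilation")
    return grown > 0

# Toy 2D example: a noisy background with a bright "lesion".
rng = np.random.default_rng(0)
toy = rng.normal(0.0, 1.0, (64, 64))
toy[20:40, 20:40] += 4.0
mask = rough_lesion_mask(toy)
```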
Modeling and simulation of dynamic ant colony's labor division for task allocation of UAV swarm
NASA Astrophysics Data System (ADS)
Wu, Husheng; Li, Hao; Xiao, Renbin; Liu, Jie
2018-02-01
The problem of unmanned aerial vehicle (UAV) task allocation is not only intrinsically complex (highly nonlinear, dynamic, highly adversarial and multi-modal) but also broadly applicable in various multi-agent systems, which has made it increasingly attractive. In this paper, based on the classic fixed response threshold model (FRTM), following the idea of "problem centered + evolutionary solution" in a bottom-up way, a new dynamic environmental stimulus, response threshold and transition probability are designed, and a dynamic ant colony's labor division (DACLD) model is proposed. DACLD allows a swarm of agents with a relatively low level of intelligence to perform complex tasks, and features a distributed framework, multiple tasks with execution order, multiple states, adaptive response thresholds and multi-individual response. With the proposed model, numerical simulations are performed to illustrate the effectiveness of the distributed task allocation scheme in two situations of UAV swarm combat (dynamic task allocation with a certain number of enemy targets, and task re-allocation due to unexpected threats). Results show that our model can obtain the heterogeneous UAVs' real-time positions and states simultaneously, and has a high degree of self-organization, flexibility and real-time responsiveness to dynamic environments.
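For context, in the classic fixed response threshold model an individual with threshold theta engages a task of stimulus intensity s with probability s^n/(s^n + theta^n). The sketch below implements that response rule with the commonly used exponent n = 2, as a baseline against which the dynamic variant described above can be compared; all values are illustrative.

```python
# Classic fixed response threshold model (FRTM): an agent with threshold
# theta responds to a stimulus of intensity s with probability
# s**n / (s**n + theta**n). The exponent n = 2 is a commonly used value.
import random

def response_probability(stimulus, theta, n=2):
    return stimulus**n / (stimulus**n + theta**n)

def responds(stimulus, theta, n=2, rng=None):
    rng = rng or random.Random()
    return rng.random() < response_probability(stimulus, theta, n)

# A low-threshold agent engages a given stimulus far more readily than a
# high-threshold one, which is what produces division of labour.
print(response_probability(stimulus=2.0, theta=1.0))  # 0.8
print(response_probability(stimulus=2.0, theta=4.0))  # 0.2
```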
Turbidity-controlled sampling for suspended sediment load estimation
Jack Lewis
2003-01-01
Abstract - Automated data collection is essential to effectively measure suspended sediment loads in storm events, particularly in small basins. Continuous turbidity measurements can be used, along with discharge, in an automated system that makes real-time sampling decisions to facilitate sediment load estimation. The Turbidity Threshold Sampling method distributes...
Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.
2016-01-01
Objective/Background Circadian rhythm sleep-wake disorders often manifest during the adolescent years. Measurement of circadian phase such as the Dim Light Melatonin Onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. Patients/Methods A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 mins (13 samples) and the other from samples taken every 60 mins (7 samples). Three standard thresholds (first 3 melatonin values mean + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling rate DLMOs was defined as ± 1 h. Results and Conclusions Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ± 1 h of DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227
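A common way to compute the DLMO from such a profile is to linearly interpolate the clock time at which salivary melatonin first rises above a fixed threshold (e.g. 4 pg/mL). The sketch below implements that calculation on a hypothetical hourly profile; it is not the study's analysis code.

```python
# Estimate the Dim Light Melatonin Onset (DLMO) as the interpolated time at
# which melatonin first crosses a fixed threshold. Times and concentrations
# below are hypothetical, for illustration only.
import numpy as np

def dlmo_time(times_h, melatonin_pg_ml, threshold=4.0):
    t = np.asarray(times_h, dtype=float)
    m = np.asarray(melatonin_pg_ml, dtype=float)
    for i in range(1, len(m)):
        if m[i - 1] < threshold <= m[i]:
            # Linear interpolation between the bracketing samples.
            frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    return None  # threshold never crossed within the sampling window

# Hourly samples starting 5 h before a 23:00 bedtime (hours are clock time).
times = [18, 19, 20, 21, 22, 23, 24]
levels = [1.0, 1.5, 2.2, 3.1, 5.6, 9.0, 12.0]
print(dlmo_time(times, levels))  # ~21.36, i.e. about 21:22
```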
Cao, Xiaoqin; Li, Xiaofei; Li, Jian; Niu, Yunhui; Shi, Lu; Fang, Zhenfeng; Zhang, Tao; Ding, Hong
2018-01-15
A sensitive and reliable multi-mycotoxin-based method was developed to identify and quantify several carcinogenic mycotoxins in human blood and urine, as well as edible animal tissues, including muscle and liver tissue from swine and chickens, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). For the toxicokinetic studies with individual mycotoxins, highly sensitive analyte-specific LC-MS/MS methods were developed for rat plasma and urine. Sample purification consisted of a rapid 'dilute and shoot' approach in urine samples, a simple 'dilute, evaporate and shoot' approach in plasma samples and a 'QuEChERS' procedure in edible animal tissues. The multi-mycotoxin and analyte-specific methods were validated in-house: the limits of detection (LOD) for the multi-mycotoxin and analyte-specific methods ranged from 0.02 to 0.41 μg/kg (μg/L) and from 0.01 to 0.19 μg/L, respectively, and the limits of quantification (LOQ) ranged from 0.10 to 1.02 μg/kg (μg/L) and from 0.09 to 0.47 μg/L, respectively. Apparent recoveries of the samples spiked with 0.25 to 4 μg/kg (μg/L) ranged from 60.1% to 109.8% with relative standard deviations below 15%. The methods were successfully applied to real samples. To the best of our knowledge, this is the first study carried out using a small group of patients from the Chinese population with hepatocellular carcinoma to assess their exposure to carcinogenic mycotoxins using biomarkers. Finally, the multi-mycotoxin method is a useful analytical method for assessing exposure to mycotoxins in edible animal tissues. The analyte-specific methods could be useful during toxicokinetic and toxicological studies. Copyright © 2017. Published by Elsevier B.V.
The MPLEx Protocol for Multi-omic Analyses of Soil Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.
Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types and even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.
Abraham, Mwesigye R; Susan, Tumwebaze B
2017-02-01
The mining and processing of copper in Kilembe, Western Uganda, from 1956 to 1982 left over 15 Mt of cupriferous and cobaltiferous pyrite dumped within a mountain river valley, in addition to mine water which is pumped to the land surface. This study was conducted to assess the sources and concentrations of heavy metals and trace elements in Kilembe mine catchment water. Multi-element analysis of trace elements from point sources and sinks, including mine tailings, mine water, mine leachate, Nyamwamba River water, public water sources and domestic water samples, was conducted using ICP-MS. The study found that mean concentrations (mg/kg) of Co (112), Cu (3320), Ni (131) and As (8.6) in mine tailings were significantly higher than the world average crust and that the tailings were being eroded and discharged into water bodies within the catchment. Underground mine water and leachate contained higher mean concentrations (μg/L) of Cu (9470), Co (3430) and Ni (590) compared with background concentrations (μg/L) in uncontaminated water of 1.9, 0.21 and 0.67 for Cu, Co and Ni respectively. Over 25% of household water samples exceeded the UK drinking water threshold for Al of 200 μg/L, Co exceeded the Wisconsin (USA) drinking water threshold of 40 μg/L in 40% of samples, and Fe exceeded the UK threshold of 200 μg/L in 42% of samples. The study, however, found that besides mining activities, natural geological weathering processes also contributed to Al, Fe, and Mn water contamination in a number of public water sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adaptive local thresholding for robust nucleus segmentation utilizing shape priors
NASA Astrophysics Data System (ADS)
Wang, Xiuzhong; Srinivas, Chukka
2016-03-01
This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
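The block-wise thresholding followed by per-pixel interpolation can be illustrated with a simplified stand-in: below, each block's threshold is taken from Otsu's criterion on the local histogram (rather than the saliency-weighted foreground/background histograms of the method described above), and the block thresholds are upsampled to a full-resolution threshold image. Function names and parameters are illustrative.

```python
# Simplified local thresholding: one threshold per block, bilinear
# interpolation of block thresholds to a per-pixel threshold image, then a
# pixel-wise comparison. Otsu's criterion stands in for the saliency-weighted
# histogram threshold of the method described above.
import numpy as np
from scipy.ndimage import zoom
from skimage.filters import threshold_otsu

def local_threshold_image(image, block=64):
    h, w = image.shape
    nby, nbx = int(np.ceil(h / block)), int(np.ceil(w / block))
    block_thresh = np.empty((nby, nbx))
    for by in range(nby):
        for bx in range(nbx):
            patch = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
            if patch.min() == patch.max():
                block_thresh[by, bx] = patch.min()   # flat block: trivial threshold
            else:
                block_thresh[by, bx] = threshold_otsu(patch)
    # Interpolate the coarse grid of thresholds up to full resolution.
    thresh_img = zoom(block_thresh, (h / nby, w / nbx), order=1)
    return thresh_img[:h, :w]

def foreground(image, block=64):
    return image > local_threshold_image(image, block)
```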
The Purpose of Generating Fatigue Crack Growth Threshold Data
NASA Technical Reports Server (NTRS)
Forth, Scott
2006-01-01
Test data show that C(T), M(T) and ESE(T) specimens of different widths and thicknesses generate different thresholds. Structures designed for "infinite life" are being re-evaluated: a) the threshold changes from 6 to 3 ksi·in^(1/2); b) the computational life changes from infinite to 4 missions. Multi-million dollar test programs are required to substantiate operation. Using ASTM E647 as standard guidance to generate threshold data is not practical. A threshold test approach needs to be standardized that will provide positive margin for high-cycle fatigue applications.
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
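The fixed-threshold variant (DSFT-style logic) can be sketched as follows: each snapshot is compared with the last released one and a fresh Laplace-noised histogram is published only when the distance exceeds the threshold. The L1 distance, the parameter names and the omission of the privacy cost of the comparison step itself are simplifications for illustration, not the paper's full algorithm.

```python
# Distance-based sampling with a fixed threshold: publish a new noisy
# histogram only when the raw counts have moved far enough; otherwise
# re-release the previous output. The privacy accounting of the distance
# comparison itself is omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

def laplace_histogram(counts, epsilon):
    return counts + rng.laplace(scale=1.0 / epsilon, size=len(counts))

def dsft_release(snapshots, epsilon_per_release, distance_threshold):
    released, last_noisy, last_raw = [], None, None
    for counts in snapshots:
        counts = np.asarray(counts, dtype=float)
        if last_raw is None or np.abs(counts - last_raw).sum() > distance_threshold:
            last_noisy = laplace_histogram(counts, epsilon_per_release)
            last_raw = counts
        released.append(last_noisy)   # reuse the previous release when similar
    return released

streams = [[10, 5, 2], [11, 5, 2], [30, 8, 1]]
out = dsft_release(streams, epsilon_per_release=0.5, distance_threshold=10)
```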
Johnson, A P; Macgowan, R J; Eldridge, G D; Morrow, K M; Sosman, J; Zack, B; Margolis, A
2013-10-01
The objectives of this study were to: (a) estimate the costs of providing a single-session HIV prevention intervention and a multi-session intervention, and (b) estimate the number of HIV transmissions that would need to be prevented for the intervention to be cost-saving or cost-effective (threshold analysis). Project START was evaluated with 522 young men aged 18-29 years released from eight prisons located in California, Mississippi, Rhode Island, and Wisconsin. Cost data were collected prospectively. Costs per participant were $689 for the single-session comparison intervention, and ranged from $1,823 to $1,836 for the Project START multi-session intervention. From the incremental threshold analysis, the multi-session intervention would be cost-effective if it prevented one HIV transmission for every 753 participants compared to the single-session intervention. Costs are comparable with those of other HIV prevention programs. Program managers can use these data to gauge the costs of initiating these HIV prevention programs in correctional facilities.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than the likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to Dutch growth data and baseball pitcher salary data reveals interesting insights. The proposed method is implemented in the R package cthreshER.
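A continuous threshold (kinked-line) expectile fit can be sketched as follows: for a candidate threshold t the model y ≈ b0 + b1*x + b2*(x - t)_+ is fitted by iteratively reweighted least squares under the asymmetric squared loss, and t is chosen by grid search. This is an illustrative re-implementation of the idea, not the cthreshER package.

```python
# Continuous threshold expectile regression via grid search over the kink
# location and IRLS for the asymmetric squared (expectile) loss.
import numpy as np

def fit_expectile_kink(x, y, t, tau=0.5, n_iter=50):
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1.0 - tau)   # asymmetric squared loss weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    resid = y - X @ beta
    w = np.where(resid >= 0, tau, 1.0 - tau)
    return beta, np.sum(w * resid ** 2)

def grid_search_threshold(x, y, tau=0.5, n_grid=50):
    grid = np.quantile(x, np.linspace(0.1, 0.9, n_grid))  # interior candidates
    fits = [(t, *fit_expectile_kink(x, y, t, tau)) for t in grid]
    return min(fits, key=lambda f: f[2])                  # (t_hat, beta_hat, loss)

# Toy data with a true kink at x = 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 4, 300)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0) + rng.normal(0, 0.3, x.size)
t_hat, beta_hat, _ = grid_search_threshold(x, y, tau=0.7)
```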
NASA Astrophysics Data System (ADS)
Fanood, Mohammad M. Rafiee; Ram, N. Bhargava; Lehmann, C. Stefan; Powis, Ivan; Janssen, Maurice H. M.
2015-06-01
Simultaneous, enantiomer-specific identification of chiral molecules in multi-component mixtures is extremely challenging. Many established techniques for single-component analysis fail to provide selectivity in multi-component mixtures and lack sensitivity for dilute samples. Here we show how enantiomers may be differentiated by mass-selected photoelectron circular dichroism using an electron-ion coincidence imaging spectrometer. As proof of concept, vapours containing ~1% of two chiral monoterpene molecules, limonene and camphor, are irradiated by a circularly polarized femtosecond laser, resulting in multiphoton near-threshold ionization with little molecular fragmentation. Large chiral asymmetries (2-4%) are observed in the mass-tagged photoelectron angular distributions. These asymmetries switch sign according to the handedness (R- or S-) of the enantiomer in the mixture and scale with enantiomeric excess of a component. The results demonstrate that mass spectrometric identification of mixtures of chiral molecules and quantitative determination of enantiomeric excess can be achieved in a table-top instrument.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in handling either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples which minimize the representation error with a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination robust face recognition and automatic target recognition are presented.
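The shrinkage-threshold selection mentioned above can be sketched for the simple case of soft-thresholding a single band of coefficients. One common generalized cross validation criterion for shrinkage (Jansen et al.) is GCV(t) = (1/N)·||w - soft(w, t)||^2 / (N0/N)^2, where N0 is the number of coefficients zeroed at threshold t. The sketch below minimizes this criterion over a grid of thresholds; it is a generic illustration, not the dissertation's shearlet-domain scheme.

```python
# Soft-threshold shrinkage with the threshold chosen by a generalized
# cross-validation (GCV) criterion; a generic 1-D illustration.
import numpy as np

def soft(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def gcv(w, t):
    n = w.size
    n0 = np.count_nonzero(soft(w, t) == 0)
    if n0 == 0:
        return np.inf
    return (np.sum((w - soft(w, t)) ** 2) / n) / (n0 / n) ** 2

def gcv_threshold(w, n_grid=100):
    grid = np.linspace(0.0, np.max(np.abs(w)), n_grid)
    return min(grid, key=lambda t: gcv(w, t))

# Noisy sparse coefficients: a few large entries plus Gaussian noise.
rng = np.random.default_rng(0)
w_true = np.zeros(1024)
w_true[:20] = rng.uniform(3, 6, 20)
w_noisy = w_true + rng.normal(0, 0.5, w_true.size)
t_star = gcv_threshold(w_noisy)
w_hat = soft(w_noisy, t_star)
```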
Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu
2016-08-18
Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which the schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m², respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area.
Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.
2002-01-01
A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared that has a fluorescence intensity that is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200-900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.
Herlitz, Georg N.; Sanders, Renee L.; Cheung, Nora H.; Coyle, Susette M.; Griffel, Benjamin; Macor, Marie A.; Lowry, Stephen F.; Calvano, Steve E.; Gale, Stephen C.
2014-01-01
Introduction Human injury or infection induces systemic inflammation with characteristic neuro-endocrine responses. Fluctuations in autonomic function during inflammation are reflected by beat-to-beat variation in heart rate, termed heart rate variability (HRV). In the present study, we determine threshold doses of endotoxin needed to induce observable changes in markers of systemic inflammation, we investigate whether metrics of HRV exhibit a differing threshold dose from other inflammatory markers, and we investigate the size of data sets required for meaningful use of multi-scale entropy (MSE) analysis of HRV. Methods Healthy human volunteers (n=25) were randomized to receive placebo (normal saline) or endotoxin/lipopolysaccharide (LPS): 0.1, 0.25, 0.5, 1.0, or 2.0 ng/kg administered intravenously. Vital signs were recorded every 30 minutes for 6 hours and then at 9, 12, and 24 hours after LPS. Blood samples were drawn at specific time points for cytokine measurements. HRV analysis was performed using EKG epochs of 5 minutes. MSE for HRV was calculated for all dose groups to scale factor 40. Results The lowest significant threshold dose was noted in core temperature at 0.25 ng/kg. Endogenous TNF-α and IL-6 were significantly responsive at the next dosage level (0.5 ng/kg) along with elevations in circulating leukocytes and heart rate. Responses were exaggerated at higher doses (1 and 2 ng/kg). Time domain and frequency domain HRV metrics similarly suggested a threshold dose, differing from placebo at 1.0 and 2.0 ng/kg, below which no clear pattern in response was evident. By applying repeated-measures ANOVA across scale factors, a significant decrease in MSE was seen at 1.0 and 2.0 ng/kg by 2 hours post exposure to LPS. While not statistically significant below 1.0 ng/kg, MSE unexpectedly decreased across all groups in an orderly dose-response pattern not seen in the other outcomes. Conclusions By using repeated-measures ANOVA (rANOVA) across scale factors, MSE can detect autonomic change after LPS challenge in a group of 25 subjects using EKG epochs of only 5 minutes and entropy analysis to a scale factor of only 40, potentially facilitating MSE's wider use as a research tool or bedside monitor. Traditional markers of inflammation generally exhibit threshold dose behavior. In contrast, MSE's apparent continuous dose-response pattern, while not statistically verifiable in this study, suggests a potential subclinical harbinger of infectious or other insult. The possible derangement of autonomic complexity prior to or independent of the cytokine surge cannot be ruled out. Future investigation should focus on confirmation of overt inflammation following observed decreases in MSE in a clinical setting. PMID:25526373
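Multi-scale entropy is computed by coarse-graining the beat-to-beat series at each scale factor and evaluating sample entropy on each coarse-grained series. The sketch below uses the common settings m = 2 and r = 0.2 times the standard deviation and a simplified sample entropy implementation; it is an illustration, not necessarily the study's exact analysis.

```python
# Minimal multi-scale entropy (MSE) sketch: coarse-grain the RR-interval
# series at each scale, then compute sample entropy on the coarse series.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # Simplified SampEn: -ln(A/B), with A and B the counts of template
    # matches of lengths m+1 and m (Chebyshev distance, self-matches excluded).
    x = np.asarray(x, dtype=float)
    n = x.size
    def matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(templates.shape[0]):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1   # exclude the self-match
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def coarse_grain(x, tau):
    n = len(x) // tau
    return np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)

def multiscale_entropy(rr_intervals, max_scale=10, m=2, r_frac=0.2):
    r = r_frac * np.std(rr_intervals)
    return [sample_entropy(coarse_grain(rr_intervals, tau), m=m, r=r)
            for tau in range(1, max_scale + 1)]

# Hypothetical RR-interval series (seconds); real epochs here were 5 minutes.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(600)
mse_curve = multiscale_entropy(rr, max_scale=10)
```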
Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S
2016-09-01
Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for 9 days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants' discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
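A minimal multi-level generalization of Floyd-Steinberg error diffusion is sketched below: each pixel is snapped to the nearest of L output levels and the quantization error is diffused to unprocessed neighbours. The kernel weights and uniform level placement are the common textbook choices, not necessarily those of the paper.

```python
# Floyd-Steinberg error diffusion generalized to multi-level quantization.
import numpy as np

def error_diffuse(image, levels=4):
    img = np.asarray(image, dtype=float).copy()   # values assumed in [0, 1]
    out = np.zeros_like(img)
    quant = np.linspace(0.0, 1.0, levels)         # uniformly spaced output levels
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = quant[np.argmin(np.abs(quant - old))]   # nearest output level
            out[y, x] = new
            err = old - new
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: a horizontal grayscale ramp quantized to 4 levels.
ramp = np.tile(np.linspace(0, 1, 256), (64, 1))
quantized = error_diffuse(ramp, levels=4)
```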
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.
2016-05-03
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample.
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
This document contains analytical methods for the analysis of metals and cyanide in environmental samples. It also contains contractual requirements for laboratories participating in Superfund's Contract Laboratory Program.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
NASA Astrophysics Data System (ADS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2014-11-01
Currently, inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates the possibility of constructing a PET scanner from plastic scintillators, which would allow single-bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of its application to a single module with a 30 cm long plastic strip, read out at both ends by Hamamatsu R4998 photomultipliers. A sampling scheme that generates a vector of samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of 511 keV energy. A statistical test for a multivariate normal (MVN) distribution of the measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in the voltage domain are approximately normally distributed variables. With the presented analysis of vectors built from waveform samples acquired at four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).
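As a rough illustration of how threshold-sampled vectors could drive hit-position reconstruction, the sketch below fits a multivariate normal model per calibrated irradiation position and assigns a new sample vector to the position with the highest log-likelihood; the data layout and function names are hypothetical and do not reproduce the J-PET implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_position_models(calib_vectors):
    """calib_vectors: dict mapping a known irradiation position (cm) to an
    (n_events, n_features) array of threshold-crossing samples recorded there."""
    models = {}
    for pos, X in calib_vectors.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        models[pos] = multivariate_normal(mean=mu, cov=cov, allow_singular=True)
    return models

def reconstruct_position(models, x):
    """Return the calibrated position whose MVN model gives the measured
    sample vector x the highest log-likelihood."""
    return max(models, key=lambda pos: models[pos].logpdf(x))
```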
NASA Astrophysics Data System (ADS)
Yulianti, D.; Marwoto, P.; Fianti
2018-03-01
This research aims to determine the type, concentration, and distribution of heavy metals in vegetables on the banks of the Kaligarang river using the Neutron Activation Analysis (NAA) method. The results are then compared to predefined thresholds. Vegetable samples included papaya leaf, cassava leaf, spinach, and water spinach. The research was conducted by taking snippets of sediment and vegetation from 4 locations along the Kaligarang river. These snippets were then prepared and irradiated in the reactor, and the activated samples emit γ-rays. The γ-ray energies identify the elements contained in the sample by matching against the neutron activation table. The results showed that vegetables at Kaligarang contain Cr-50, Co-59, Zn-64, Fe-58, and Mn-25, distributed at all research locations. Furthermore, the level of the detected metal elements is below the predefined threshold.
DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.
2017-12-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to the success of most of the existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster, and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multi-spectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
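For orientation, a minimal encoder-decoder for per-pixel binary segmentation is sketched below in PyTorch; the layer counts, channel sizes, and the 16-band input are illustrative assumptions and do not reproduce the actual CloudCNN architecture.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal encoder-decoder for per-pixel binary (cloud / no-cloud) segmentation."""
    def __init__(self, in_channels=16):          # e.g. 16 spectral bands (assumption)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                   # per-pixel logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training step sketch: binary cross-entropy against a 0/1 cloud mask.
model = TinyEncoderDecoder()
loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(2, 16, 64, 64)                    # dummy multispectral patches
y = torch.randint(0, 2, (2, 1, 64, 64)).float()   # dummy cloud masks
loss = loss_fn(model(x), y)
loss.backward()
```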
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Jianping; Sofia, Giulia; Tarolli, Paolo
2014-05-01
Moon surface features have great significance in understanding and reconstructing the lunar geological evolution. Linear structures like rilles and ridges are closely related to internally forced tectonic movement. The craters widely distributed on the moon are also key research targets for externally forced geological evolution. The extremely rare availability of samples and the difficulty of field work make remote sensing the most important approach for planetary studies. New and advanced lunar probes launched by China, U.S., Japan and India nowadays provide a wealth of high-quality data, especially in the form of high-resolution Digital Terrain Models (DTMs), bringing new opportunities and challenges for feature extraction on the moon. The aim of this study is to recognize and extract lunar features using geomorphometric analysis based on multi-scale parameters and multi-resolution DTMs. The considered digital datasets include CE1-LAM (Chang'E One, Laser AltiMeter) data with resolution of 500m/pix, LRO-WAC (Lunar Reconnaissance Orbiter, Wide Angle Camera) data with resolution of 100m/pix, LRO-LOLA (Lunar Reconnaissance Orbiter, Lunar Orbiter Laser Altimeter) data with resolution of 60m/pix, and LRO-NAC (Lunar Reconnaissance Orbiter, Narrow Angle Camera) data with resolution of 2-5m/pix. We considered surface derivatives to recognize the linear structures, including rilles and ridges. Different window scales and thresholds are considered for feature extraction. We also calculated the roughness index to identify erosion/deposition areas within craters. The results underline the suitability of the adopted methods for feature recognition on the moon surface. The roughness index is found to be a useful tool to distinguish new craters, with higher roughness, from old craters, which present a smooth and less rough surface.
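A simple way to compute such a roughness index is the local standard deviation of residual topography (the DTM minus its moving-window mean); the sketch below assumes this definition and an illustrative window size, since the abstract does not give the exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughness_index(dtm, window=9):
    """Roughness as the local standard deviation of residual topography,
    i.e. the DTM minus its moving-window mean, evaluated over the same window."""
    mean = uniform_filter(dtm, size=window)
    residual = dtm - mean
    # local std of the residual = sqrt(E[r^2] - E[r]^2)
    r_mean = uniform_filter(residual, size=window)
    r_sq_mean = uniform_filter(residual ** 2, size=window)
    return np.sqrt(np.maximum(r_sq_mean - r_mean ** 2, 0.0))

# Example: flag "rough" crater interiors by thresholding the index
# dtm = np.load("lola_patch.npy")            # hypothetical 60 m/pix DTM tile
# rough_mask = roughness_index(dtm) > 2.0    # threshold in metres (assumption)
```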
MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples
DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...
Miles, Jeffrey Hilton
2011-05-01
Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet is reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics-based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation, this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics-based coherence threshold. Examples are presented showing that the underlying tonal and coherent broadband structure, which is buried under random broadband noise and jet noise, can be determined. The method also shows the possible presence of indirect combustion noise.
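For reference, a commonly used statistics-based significance threshold for magnitude-squared coherence estimated from nd independent records is 1 − α^(1/(nd−1)), and the Fisher z-transform (arctanh) is often applied to the coherence magnitude to stabilize its variance. The sketch below assumes this standard formulation rather than the paper's specific procedure; the α value and record count are illustrative.

```python
import numpy as np

def coherence_threshold(n_records, alpha=0.05):
    """Significance threshold for magnitude-squared coherence estimated by
    averaging n_records independent records (standard null-hypothesis result)."""
    return 1.0 - alpha ** (1.0 / (n_records - 1))

def fisher_z(coh_magnitude):
    """Fisher z-transform of the coherence magnitude; approximately normal, which
    helps when choosing between the independent-record count and the larger
    overlapped-record count."""
    return np.arctanh(coh_magnitude)

print(coherence_threshold(100))   # ~0.030 with 100 independent averages
```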
NASA Astrophysics Data System (ADS)
Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan
2018-04-01
Quadrature and multi-channel amplitude-phase errors have to be compensated in I/Q quadrature sampling and in signals passing through multiple channels. A new method that requires neither a filter nor a standard signal is presented in this paper; it can jointly estimate the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between the signals to estimate the two amplitude-phase errors simply and effectively, and its advantages are verified by computer simulation. Finally, the superiority of the method is also verified with measured data from outfield experiments.
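Under the assumption of a roughly full-scale single-tone test signal, the amplitude ratio and quadrature phase error can be read off the RMS ratio and the normalized cross-correlation of the I and Q samples; the sketch below illustrates that idea and is not the paper's combined multi-channel estimator.

```python
import numpy as np

def iq_imbalance(i, q):
    """Estimate amplitude ratio and quadrature phase error from I/Q samples of a
    single-tone test signal.  For I = cos(wt), Q = g*sin(wt + phi):
    g = rms(Q)/rms(I) and sin(phi) = <I*Q> / (rms(I)*rms(Q))."""
    i = i - i.mean()
    q = q - q.mean()
    gain = np.sqrt(np.mean(q ** 2) / np.mean(i ** 2))
    phase = np.arcsin(np.mean(i * q) / np.sqrt(np.mean(i ** 2) * np.mean(q ** 2)))
    return gain, phase

# quick self-check with a synthetic tone
t = np.arange(4096) / 4096
i = np.cos(2 * np.pi * 37 * t)
q = 1.05 * np.sin(2 * np.pi * 37 * t + np.deg2rad(3.0))
print(iq_imbalance(i, q))   # ~ (1.05, 0.052 rad)
```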
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of average point-to-surface residual to reduce the random measurement error and then approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, in order to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
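A minimal version of the average point-to-surface residual metric is sketched below: each transformed source point is compared against the best-fit plane through its k nearest target points. The neighbourhood size and plane-fitting choice are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def avg_point_to_surface_residual(source, target, k=10):
    """Average point-to-surface residual between a (transformed) source cloud and
    a target cloud.  The target surface is approximated locally by the best-fit
    plane (via SVD) through the k nearest target points of each source point."""
    tree = cKDTree(target)
    _, idx = tree.query(source, k=k)
    residuals = np.empty(len(source))
    for n, (p, neighbours) in enumerate(zip(source, idx)):
        pts = target[neighbours]
        centroid = pts.mean(axis=0)
        # plane normal = right singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        residuals[n] = abs(np.dot(p - centroid, normal))
    return residuals.mean()
```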
NASA Astrophysics Data System (ADS)
Petrie, Kyle G.
Composites of multi-walled carbon nanotubes (MWCNTs) with polypropylene (PP) and thermoplastic olefins (TPOs) were prepared by melt compounding. Two non-covalent functionalization methods were employed to improve nanotube dispersion and the resulting composite properties are reported. The first functionalization approach involved partial coating of the surface of the nanotubes with a hyperbranched polyethylene (HBPE). MWCNT functionalization with HBPE was only moderately successful in breaking up the large aggregates that formed upon melt mixing with PP. In spite of the formation of large aggregates, the samples were conductive above a percolation threshold of 7.3 wt%. MWCNT functionalization did not disrupt the electrical conductivity of the nanotubes. The composite strength was improved with addition of nanotubes, but ductility was severely compromised because of the existence of aggregates. The second method involved PP matrix functionalization with aromatic moieties capable of pi-pi interaction with MWCNT sidewalls. Various microscopy techniques revealed the addition of only 25 wt% of PP-g-pyridine (Py) to the neat PP was capable of drastically reducing nanotube aggregate size and amount. Raman spectroscopy confirmed improved polymer/nanotube interaction with the PP-g-Py matrix. Electrical percolation threshold was obtained at a MWCNT loading of approximately 1.2 wt%. Electrical conductivity on the order of 10 -2 S/m was achieved, suggesting possible use in semi-conducting applications. Composite strength was improved upon addition of MWCNTs. The matrix functionalization with Py resulted in a significant improvement in composite ductility when filled with MWCNTs in comparison to its maleic anhydride (MA) counterpart. Preliminary investigations suggest that the use of alternating current (AC) electric fields may be effective in aligning nanotubes in PP to reduce the filler loading required for electrical percolation. Composites containing MWCNT within PP/ethylene-octene copolymer (EOC) blends were prepared. Microscopy revealed that MWCNTs localized preferentially in the EOC phase. This was explained by the tendency of the system to minimize interfacial energy when the MWCNTs reside in the thermodynamically preferential phase. A kinetic approach, which involved pre-mixing the MWCNTs with PP and adding the EOC phase subsequently was attempted to monitor the migration of MWCNTs. MWCNTs began to migrate after two minutes of melt mixing with the EOC. The PP-g-Py matrix functionalization appears to slightly delay the migration. A reduction in electrical percolation threshold to 0.5 wt% MWCNTs was achieved with a co-continuous blend morphology, consisting of a 50/50 by weight ratio of PP and EOC.
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the sampling-rate discrepancy. During inter-sampling of slow observation data, observation noise can be regarded as infinite. The Kalman gain is unknown and approaches zero. The residual is also unknown. Therefore, the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state-error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation for the state error is established to propagate this quantity from moments with observations to moments without. In addition, a multiplicative adjustment factor acting on the residual is introduced in place of the Kalman gain. The filter's estimated state can then be compensated even when no visual observation data are available. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
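For context, the sketch below shows the basic multi-rate structure (prediction at the fast inertial rate, updates only when a slow vision measurement arrives) with a plain 1-D linear Kalman filter; the cubature filtering and the paper's residual-compensation scheme are not reproduced, and all noise settings are illustrative.

```python
import numpy as np

def multirate_kf(inertial_rate=100, vision_rate=10, n_steps=1000):
    """1-D constant-velocity Kalman filter: predict at the fast inertial rate,
    update only when a (slow) vision measurement arrives."""
    dt = 1.0 / inertial_rate
    ratio = inertial_rate // vision_rate
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # vision measures position only
    Q = 1e-4 * np.eye(2)                       # process noise (assumption)
    R = np.array([[1e-2]])                     # vision measurement noise (assumption)
    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(0)
    estimates = []
    for step in range(n_steps):
        x = F @ x                              # fast prediction step
        P = F @ P @ F.T + Q
        if step % ratio == 0:                  # slow vision update is available
            z = np.array([np.sin(step * dt)]) + rng.normal(0, 0.1, 1)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```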
Ankerst, Donna Pauler; Gelfond, Jonathan; Goros, Martin; Herrera, Jesus; Strobl, Andreas; Thompson, Ian M.; Hernandez, Javier; Leach, Robin J.
2016-01-01
PURPOSE To characterize the diagnostic properties of serial percent-free prostate-specific antigen (PSA) in relation to PSA in a multi-ethnic, multi-racial cohort of healthy men. MATERIALS AND METHODS A total of 6,982 percent-free PSA and PSA measures were obtained from participants in a 12+ year Texas screening study comprising 1,625 men who never underwent biopsy, 497 who underwent one or more biopsies negative for prostate cancer, and 61 diagnosed with prostate cancer. The area under the receiver operating characteristic curve (AUC) for percent-free PSA and the proportion of patients with values fluctuating across multiple visits were evaluated according to two thresholds (under 15% versus under 25%). The proportion of cancer cases in which percent-free PSA indicated a positive test before PSA exceeded 4 ng/mL, and the number of negative biopsies that would have been spared by a negative percent-free PSA test, were also computed. RESULTS Percent-free PSA fluctuated around its threshold of < 25% (< 15%) in 38.3% (78.1%), 42.2% (20.9%), and 11.4% (25.7%) of patients never biopsied, with negative biopsies, and with positive biopsies, respectively. At the same thresholds, percent-free PSA tested positive earlier than PSA in 71.4% (34.2%) of cancer cases, and among men with multiple negative biopsies and a PSA > 4 ng/mL, percent-free PSA would have tested negative in 31.6% (65.8%) of instances. CONCLUSIONS Percent-free PSA should accompany PSA testing in order to potentially spare unnecessary biopsies or detect cancer earlier. When near the threshold, both tests should be repeated due to commonly observed fluctuation. PMID:26979652
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...
The Cost and Threshold Analysis of Retention in Care (RiC): A Multi-Site National HIV Care Program.
Maulsby, Catherine; Jain, Kriti M; Weir, Brian W; Enobun, Blessing; Riordan, Maura; Charles, Vignetta E; Holtgrave, David R
2017-03-01
Persons diagnosed with HIV but not retained in HIV medical care accounted for the majority of HIV transmissions in 2009 in the United States (US). There is an urgent need to implement and disseminate HIV retention in care programs; however, little is known about the costs associated with implementing retention in care programs. We assessed the costs and cost-saving thresholds for seven Retention in Care (RiC) programs implemented in the US using standard methods recommended by the US Panel on Cost-effectiveness in Health and Medicine. Data were gathered from accounting and program implementation records, entered into a standardized RiC economic analysis spreadsheet, and standardized to a 12-month time frame. Total program costs from the societal perspective ranged from $47,919 to $423,913 per year or $146 to $2,752 per participant. Cost-saving thresholds ranged from 0.13 to 1.18 HIV transmissions averted per year. We estimated that these cost-saving thresholds could be achieved through 1 to 16 additional person-years of viral suppression. Across a range of program models, retention in care interventions had highly achievable cost-saving thresholds, suggesting that retention in care programs are a judicious use of resources.
Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.
Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M
2016-10-07
Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
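A pairwise version of the MA idea is sketched below: the log-difference M is detrended against the log-average A, and half of the corrected difference is pushed back into each vector. A low-order polynomial stands in for the loess fit here, and this is a conceptual analogue rather than the multi-MA method of the MDimNormn package.

```python
import numpy as np

def ma_normalize_pair(x, y, degree=2):
    """Pairwise MA normalization of two log-intensity vectors (e.g. the same
    antibody panel measured on two assay plates).  M = x - y, A = (x + y)/2;
    a polynomial trend of M versus A stands in for the loess fit, and half of
    the corrected M is redistributed to each vector."""
    m = x - y
    a = 0.5 * (x + y)
    trend = np.polyval(np.polyfit(a, m, degree), a)
    m_corr = m - trend
    return a + 0.5 * m_corr, a - 0.5 * m_corr
```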
NASA Technical Reports Server (NTRS)
Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.;
2015-01-01
The Fast Plasma Investigation (FPI) on NASAs Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs) with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.
Mode synthesizing atomic force microscopy and mode-synthesizing sensing
Passian, Ali; Thundat, Thomas George; Tetard, Laurene
2013-05-17
A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.
Mode-synthesizing atomic force microscopy and mode-synthesizing sensing
Passain, Ali; Thundat, Thomas George; Tetard, Laurene
2014-07-22
A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.
Sb-related defects in Sb-doped ZnO thin film grown by pulsed laser deposition
NASA Astrophysics Data System (ADS)
Luo, Caiqin; Ho, Lok-Ping; Azad, Fahad; Anwand, Wolfgang; Butterling, Maik; Wagner, Andreas; Kuznetsov, Andrej; Zhu, Hai; Su, Shichen; Ling, Francis Chi-Chung
2018-04-01
Sb-doped ZnO films were fabricated on c-plane sapphire using the pulsed laser deposition method and characterized by Hall effect measurement, X-ray photoelectron spectroscopy, X-ray diffraction, photoluminescence, and positron annihilation spectroscopy. Systematic studies on the growth conditions with different Sb composition, oxygen pressure, and post-growth annealing were conducted. If the Sb doping concentration is lower than the threshold of ~8 × 10^20 cm^-3, the as-grown films grown at an appropriate oxygen pressure could reach n ~ 4 × 10^20 cm^-3. The shallow donor was attributed to the SbZn-related defect. Annealing these samples led to the formation of the SbZn-2VZn shallow acceptor, which subsequently compensated the free carriers. For samples with Sb concentration exceeding the threshold, the as-grown samples were highly resistive. X-ray diffraction results showed that the Sb dopant occupied the O site rather than the Zn site as the Sb doping exceeded the threshold, whereas the SbO-related deep acceptor was responsible for the high resistivity of the samples.
Estimating daily climatologies for climate indices derived from climate model data and observations
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof
2015-01-01
Climate indices help to describe the past, present, and the future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that the percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
Diffraction measurements using the LHC Beam Loss Monitoring System
NASA Astrophysics Data System (ADS)
Kalliokoski, Matti
2017-03-01
The Beam Loss Monitoring (BLM) system of the Large Hadron Collider protects the machine from beam-induced damage by measuring the absorbed dose rates of beam losses, and by triggering a beam dump if the rates increase above the allowed threshold limits. Although the detection time scales are optimized for multi-turn losses, information on fast losses can be recovered from the loss data. In this paper, methods for using the BLM system in diffraction studies are discussed.
NASA Technical Reports Server (NTRS)
Menanteau, Felipe; Gonzalez, Jorge; Juin, Jean-Baptiste; Marriage, Tobias; Reese, Erik D.; Acquaviva, Viviana; Aguirre, Paula; Appel, John Willam; Baker, Andrew J.; Barrientos, L. Felipe;
2010-01-01
We present optical and X-ray properties for the first confirmed galaxy cluster sample selected by the Sunyaev-Zel'dovich Effect from 148 GHz maps over 455 square degrees of sky made with the Atacama Cosmology Telescope. These maps, coupled with multi-band imaging on 4-meter-class optical telescopes, have yielded a sample of 23 galaxy clusters with redshifts between 0.118 and 1.066. Of these 23 clusters, 10 are newly discovered. The selection of this sample is approximately mass limited and essentially independent of redshift. We provide optical positions, images, redshifts and X-ray fluxes and luminosities for the full sample, and X-ray temperatures of an important subset. The mass limit of the full sample is around 8.0 × 10^14 solar masses, with a number distribution that peaks around a redshift of 0.4. For the 10 highest significance SZE-selected cluster candidates, all of which are optically confirmed, the mass threshold is 1 × 10^15 solar masses and the redshift range is 0.167 to 1.066. Archival observations from Chandra, XMM-Newton, and ROSAT provide X-ray luminosities and temperatures that are broadly consistent with this mass threshold. Our optical follow-up procedure also allowed us to assess the purity of the ACT cluster sample. Eighty (one hundred) percent of the 148 GHz candidates with signal-to-noise ratios greater than 5.1 (5.7) are confirmed as massive clusters. The reported sample represents one of the largest SZE-selected samples of massive clusters over all redshifts within a cosmologically-significant survey volume, which will enable cosmological studies as well as future studies on the evolution, morphology, and stellar populations in the most massive clusters in the Universe.
NASA Astrophysics Data System (ADS)
Wang, Andong; Jiang, Lan; Li, Xiaowei; Wang, Zhi; Du, Kun; Lu, Yongfeng
2018-05-01
Ultrafast laser pulse temporal shaping has been widely applied in various important applications such as laser materials processing, coherent control of chemical reactions, and ultrafast imaging. However, temporal pulse shaping has remained an in-lab-only technique due to high cost, low damage threshold, and polarization dependence. Herein we propose a novel design for an ultrafast laser pulse-train generation device, which consists of multiple polarization-independent, parallel-aligned thin films. Various pulse trains with controllable temporal profiles can be generated flexibly by multiple reflections within the splitting films. Compared with other pulse train generation techniques, this method has the advantages of compact structure, low cost, high damage threshold and polarization independence. These advantages endow it with high potential for broad utilization in ultrafast applications.
Resonance Extraction from the Finite Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doring, Michael; Molina Peralta, Raquel
2016-06-01
The spectrum of excited hadrons becomes accessible in simulations of Quantum Chromodynamics on the lattice. Extensions of Lüscher's method make it possible to address multi-channel scattering problems using moving frames or modified boundary conditions to obtain more eigenvalues in finite volume. As these are at different energies, interpolations are needed to relate different eigenvalues and to help determine the amplitude. Expanding the T- or the K-matrix locally provides a controlled scheme by removing the known non-analyticities of thresholds. This can be stabilized by using Chiral Perturbation Theory. Different examples to determine resonance pole parameters and to disentangle resonances from thresholds are discussed, such as the scalar meson f0(980) and the excited baryons N(1535)1/2^- and Lambda(1405)1/2^-.
Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er
2012-04-01
In this paper, a new method was developed to differentiate spill oil samples. The synchronous fluorescence spectra in the lower nonlinear concentration range of 10^-2-10^-1 g·L^-1 were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to identify the sample sets, along with principal component analysis (PCA) as the feature extraction method. The recognition rate for the closely-related oil source samples is 92%. All the results demonstrated that the proposed method could identify the crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method is expected to be well suited to real-time spill oil identification, and can also be easily applied to oil logging and to the analysis of other multi-PAH or multi-fluorophore mixtures.
Kafka, Kyle R. P.; Hoffman, Brittany N.; Papernov, Semyon; ...
2017-12-11
The laser-induced damage threshold of fused-silica samples processed via magnetorheological finishing is investigated for polishing compounds depending on the type of abrasive material and the post-polishing surface roughness. The effectiveness of laser conditioning is examined using a ramped pre-exposure with the same 351-nm, 3-ns Gaussian pulses. Lastly, we examine chemical etching of the surface and correlate the resulting damage threshold to the etching protocol. A combination of etching and laser conditioning is found to improve the damage threshold by a factor of ~3, while maintaining <1-nm surface roughness.
NASA Astrophysics Data System (ADS)
Kafka, K. R. P.; Hoffman, B.; Papernov, S.; DeMarco, M. A.; Hall, C.; Marshall, K. L.; Demos, S. G.
2017-12-01
The laser-induced damage threshold of fused-silica samples processed via magnetorheological finishing is investigated for polishing compounds depending on the type of abrasive material and the post-polishing surface roughness. The effectiveness of laser conditioning is examined using a ramped pre-exposure with the same 351-nm, 3-ns Gaussian pulses. Finally, we examine chemical etching of the surface and correlate the resulting damage threshold to the etching protocol. A combination of etching and laser conditioning is found to improve the damage threshold by a factor of 3, while maintaining <1-nm surface roughness.
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing: approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, this method produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
Berlin, Conny; Blanch, Carles; Lewis, David J; Maladorno, Dionigi D; Michel, Christiane; Petrin, Michael; Sarp, Severine; Close, Philippe
2012-06-01
The detection of safety signals with medicines is an essential activity to protect public health. Despite widespread acceptance, it is unclear whether recently applied statistical algorithms provide enhanced performance characteristics when compared with traditional systems. Novartis has adopted a novel system for automated signal detection on the basis of disproportionality methods within a safety data mining application (Empirica™ Signal System [ESS]). ESS uses two algorithms for routine analyses: empirical Bayes Multi-item Gamma Poisson Shrinker and logistic regression (LR). A model was developed comprising 14 medicines, categorized as "new" or "established." A standard was prepared on the basis of safety findings selected from traditional sources. ESS results were compared with the standard to calculate the positive predictive value (PPV), specificity, and sensitivity. PPVs of the lower one-sided 5% and 0.05% confidence limits of the Bayes geometric mean (EB05) and of the LR odds ratio (LR0005) almost coincided for all the drug-event combinations studied. There was no obvious difference comparing the PPV of the leading Medical Dictionary for Regulatory Activities (MedDRA) terms to the PPV for all terms. The PPV of narrow MedDRA query searches was higher than that for broad searches. The widely used threshold value of EB05 = 2.0 or LR0005 = 2.0 together with more than three spontaneous reports of the drug-event combination produced balanced results for PPV, sensitivity, and specificity. Consequently, performance characteristics were best for leading terms with narrow MedDRA query searches irrespective of applying Multi-item Gamma Poisson Shrinker or LR at a threshold value of 2.0. This research formed the basis for the configuration of ESS for signal detection at Novartis. Copyright © 2011 John Wiley & Sons, Ltd.
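The reported decision rule can be written compactly: flag a drug-event combination when the lower confidence bound (EB05 or LR0005) reaches the threshold of 2.0 and more than three spontaneous reports exist, then score the flags against a reference standard. The sketch below assumes precomputed bounds and an illustrative reference vector; it is not the ESS implementation.

```python
import numpy as np

def flag_signals(eb05, n_reports, threshold=2.0, min_reports=3):
    """Flag drug-event combinations as signals when the lower confidence bound
    (EB05 or LR0005) meets the threshold and more than `min_reports`
    spontaneous reports exist."""
    return (eb05 >= threshold) & (n_reports > min_reports)

def ppv_sens_spec(flagged, reference):
    """Score the flags against a reference standard of 'true' safety findings."""
    tp = np.sum(flagged & reference)
    fp = np.sum(flagged & ~reference)
    fn = np.sum(~flagged & reference)
    tn = np.sum(~flagged & ~reference)
    return tp / (tp + fp), tp / (tp + fn), tn / (tn + fp)
```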
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEA and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems needing less function evaluations while preserving good approximation quality at the same time.
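A minimal form of such an online stopping rule is sketched below: track the indicator (e.g., hypervolume) over a sliding window and stop when its variance or its least-squares slope falls below a threshold. The window size and thresholds are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def converged(indicator_history, window=10, var_threshold=1e-6, slope_threshold=1e-4):
    """Online convergence check for an MOEA: over the last `window` generations,
    stop when the performance indicator has either a variance below
    `var_threshold` or an overall trend (least-squares slope) whose magnitude
    falls below `slope_threshold`."""
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:])
    slope = np.polyfit(np.arange(window), recent, 1)[0]
    return recent.var() < var_threshold or abs(slope) < slope_threshold
```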
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of errors. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking into account at the same time these criteria and the experimenter's preferences. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on the Multi-Criteria Decision Making theory. We have applied our method on a real dataset composed by 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalize and make explicit the experimenter's choices thus providing more reproducible results.
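As a baseline for what a genotyping-rate filter does before any multi-criteria weighting, the sketch below drops SNPs and samples whose call rate falls under a chosen threshold; the threshold values and the data layout are illustrative, not the MCDM-derived ones.

```python
import numpy as np

def call_rate_filter(genotypes, snp_threshold=0.98, sample_threshold=0.95):
    """Basic GWAS quality-control step: drop SNPs and samples whose genotyping
    (call) rate falls below a chosen threshold.  `genotypes` is a
    samples x SNPs array with np.nan marking missing calls."""
    snp_call_rate = 1.0 - np.isnan(genotypes).mean(axis=0)
    keep_snps = snp_call_rate >= snp_threshold
    sample_call_rate = 1.0 - np.isnan(genotypes[:, keep_snps]).mean(axis=1)
    keep_samples = sample_call_rate >= sample_threshold
    return genotypes[np.ix_(keep_samples, keep_snps)], keep_samples, keep_snps
```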
Wang, Peilong; Wang, Xiao; Zhang, Wei; Su, Xiaoou
2014-02-01
A novel and efficient determination method for multi-class compounds including β-agonists, sedatives, nitro-imidazoles and aflatoxins in porcine formula feed based on a fast "one-pot" extraction/multifunction impurity adsorption (MFIA) clean-up procedure has been developed. 23 target analytes belonging to four different class compounds could be determined simultaneously in a single run. Conditions for "one-pot" extraction were studied in detail. Under the optimized conditions, the multi-class compounds in porcine formula feed samples were extracted and purified with methanol contained ammonia and absorbents by one step. The compounds in extracts were purified by using multi types of absorbent based on MFIA in one pot. The multi-walled carbon nanotubes were employed to improved clean-up efficiency. Shield BEH C18 column was used to separate 23 target analytes, followed by tandem mass spectrometry (MS/MS) detection using an electro-spray ionization source in positive mode. Recovery studies were done at three fortification levels. Overall average recoveries of target compounds in porcine formula feed at each levels were >51.6% based on matrix fortified calibration with coefficients of variation from 2.7% to 13.2% (n=6). The limit of determination (LOD) of these compounds in porcine formula feed sample matrix was <5.0 μg/kg. This method was successfully applied in screening and confirmation of target drugs in >30 porcine formula feed samples. It was demonstrated that the integration of the MFIA protocol with the MS/MS instrument could serve as a valuable strategy for rapid screening and reliable confirmatory analysis of multi-class compounds in real samples. Copyright © 2013 Elsevier B.V. All rights reserved.
2012-01-01
Background Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969
Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T
2012-12-08
Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
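The two aggregation rules described above can be written in a few lines; the sketch below assumes each classifier outputs a probability of acute rejection and uses an illustrative 0.5 cutoff per classifier.

```python
import numpy as np

def average_probability(probs):
    """probs: array of shape (n_classifiers, n_patients) with each classifier's
    probability of acute rejection.  Predict rejection if the mean exceeds 0.5."""
    return probs.mean(axis=0) >= 0.5

def vote_threshold(probs, min_votes):
    """Predict rejection if at least `min_votes` classifiers individually call
    rejection (probability >= 0.5)."""
    return (probs >= 0.5).sum(axis=0) >= min_votes

# Example with a 4-classifier ensemble on 3 hypothetical patients
probs = np.array([[0.9, 0.4, 0.2],
                  [0.7, 0.6, 0.1],
                  [0.6, 0.3, 0.3],
                  [0.4, 0.7, 0.2]])
print(average_probability(probs))   # [ True  True False]
print(vote_threshold(probs, 3))     # [ True False False]
```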
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variations difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421
An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.
Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif
2017-06-23
Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy-efficiency without compromising on the fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method has 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions.
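The energy-saving idea of gating an expensive classifier behind a cheap threshold test can be sketched as follows; the impact threshold, features, and classifier interface are illustrative assumptions, not the uSurvive implementation.

```python
import numpy as np

def first_tier_trigger(accel_xyz, g=9.81, impact_threshold=2.5):
    """Cheap first tier: wake the heavier classifier only if the acceleration
    magnitude exceeds `impact_threshold` times gravity (threshold illustrative)."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    return magnitude.max() > impact_threshold * g

def detect_fall(accel_window, classifier):
    """Tiered idea in miniature: thresholding gates the expensive model so the
    classifier only runs on candidate impact windows (saving energy otherwise)."""
    if not first_tier_trigger(accel_window):
        return False                         # no impact-like event, skip the classifier
    features = np.array([[accel_window.std(), np.abs(accel_window).max()]])
    return bool(classifier.predict(features)[0])   # any sklearn-style classifier
```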
Soto, Marcelo A; Ricchiuti, Amelia Lavinia; Zhang, Liang; Barrera, David; Sales, Salvador; Thévenaz, Luc
2014-11-17
A technique to enhance the response and performance of Brillouin distributed fiber sensors is proposed and experimentally validated. The method consists in creating a multi-frequency pump pulse interacting with a matching multi-frequency continuous-wave probe. To avoid nonlinear cross-interaction between spectral lines, the method requires that the distinct pump pulse components and temporal traces reaching the photo-detector are subject to wavelength-selective delaying. This way the total pump and probe powers launched into the fiber can be incrementally boosted beyond the thresholds imposed by nonlinear effects. As a consequence of the multiplied pump-probe Brillouin interactions occurring along the fiber, the sensor response can be enhanced in exact proportion to the number of spectral components. The method is experimentally validated in a 50 km-long distributed optical fiber sensor augmented to 3 pump-probe spectral pairs, demonstrating a signal-to-noise ratio enhancement of 4.8 dB.
Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.
Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver
2018-06-20
Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
Kim, S-J; Kim, D-K; Kang, D-H
2016-04-01
We investigated and compared the efficacy of a new apparatus for detaching micro-organisms from meat samples. The efficacy of the Spindle and the stomacher in detaching micro-organisms from meat samples was evaluated, and the suitability of the suspensions generated by both methods for molecular biological analysis was also assessed. A nearly identical correlation and a high R(2) were obtained between the Spindle and the stomacher for the Aerobic Plate Count (APC), and no significant differences were observed in the detachment of three major foodborne pathogens. The suspension generated by the Spindle showed lower turbidity and total protein concentration. Significantly different threshold cycles were also observed in real-time PCR analysis using suspensions generated by the two methods. The Spindle shows nearly identical efficacy to stomacher treatment in detaching micro-organisms from meat samples. Furthermore, the higher quality of suspensions generated by the Spindle, in terms of turbidity and total protein, allows for a lower threshold cycle than stomached suspensions in real-time PCR. The Spindle could be an alternative method for detaching micro-organisms, yielding higher-quality suspensions which may be better suited for further molecular microbiological analysis. © 2016 The Society for Applied Microbiology.
Method for improving instrument response
Hahn, David W.; Hencken, Kenneth R.; Johnsen, Howard A.; Flower, William L.
2000-01-01
This invention pertains generally to a method for improving the accuracy of particle analysis under conditions of discrete particle loading, and particularly to a method for improving the signal-to-noise ratio and instrument response in laser spark spectroscopic analysis of particulate emissions. Under conditions of low particle density loading (particles/m.sup.3), resulting from low overall metal concentrations and/or large particle size, uniform sampling cannot be guaranteed. The present invention discloses a technique for separating laser sparks that arise from sample particles from those that do not; that is, a process for systematically "gating" the instrument responses arising from "sampled" particles from those responses which do not is disclosed as a solution to this problem. The disclosed approach is based on random sampling combined with a conditional analysis of each pulse. A threshold value is determined for the ratio of the intensity of a spectral line for a given element to a baseline region. If the threshold value is exceeded, the pulse is classified as a "hit", that data is collected, and an average spectrum is generated from an arithmetic average of "hits". The true metal concentration is determined from the averaged spectrum.
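A minimal sketch of the conditional "hit" gating described above, assuming each laser pulse has already been reduced to a spectrum; the array layout, channel indices and the ratio threshold are illustrative, not taken from the patent:

```python
import numpy as np

def average_hit_spectrum(spectra, line_idx, baseline_idx, ratio_threshold=2.0):
    """Average only pulses whose line/baseline intensity ratio exceeds a threshold.

    spectra        : (n_pulses, n_channels) array of single-shot spectra
    line_idx       : channel index of the analyte emission line
    baseline_idx   : channel index of a nearby baseline region
    ratio_threshold: illustrative gating value; in practice it would be calibrated
    """
    line = spectra[:, line_idx].astype(float)
    baseline = spectra[:, baseline_idx].astype(float)
    hits = line / np.maximum(baseline, 1e-12) > ratio_threshold  # boolean mask of "hit" pulses
    if not hits.any():
        return None, 0
    return spectra[hits].mean(axis=0), int(hits.sum())

# Example with synthetic pulses: most shots miss the particle, a few contain it.
rng = np.random.default_rng(0)
spectra = rng.normal(100.0, 5.0, size=(1000, 64))
spectra[::50, 10] += 300.0            # every 50th pulse "samples" a particle at channel 10
avg, n_hits = average_hit_spectrum(spectra, line_idx=10, baseline_idx=12)
print(n_hits, avg[10] if avg is not None else None)
```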
Spectrometer capillary vessel and method of making same
Linehan, John C.; Yonker, Clement R.; Zemanian, Thomas S.; Franz, James A.
1995-01-01
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube.
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
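A minimal sketch of the mixture-model thresholding idea, using scikit-learn's GaussianMixture on simulated fork lengths; the two-component assumption and the length values are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Illustrative fork lengths (mm): age-0 and age-1 juveniles form two modes.
lengths = np.concatenate([rng.normal(55, 6, 400), rng.normal(90, 9, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=1).fit(lengths)

# Age-discriminating threshold: the length where the two components are equally
# probable, found on a fine grid between the component means.
means = np.sort(gmm.means_.ravel())
grid = np.linspace(means[0], means[1], 2000).reshape(-1, 1)
resp = gmm.predict_proba(grid)
order = np.argsort(gmm.means_.ravel())
threshold = grid[np.argmin(np.abs(resp[:, order[0]] - resp[:, order[1]]))][0]
print(f"length threshold ~ {threshold:.1f} mm")
```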
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
Sampling western spruce budworm larvae by frequency of occurrence on lower crown branches.
R.R. Mason; R.C. Beckwith
1990-01-01
A sampling method was derived whereby budworm density can be estimated by the frequency of occurrence of larvae over a given threshold number instead of by direct counts on branch samples. The model used for converting frequencies to mean densities is appropriate for nonrandom as well as random distributions and, therefore, is applicable to all population densities of...
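A minimal sketch of the frequency-to-density conversion under a simple Poisson assumption with a threshold of zero larvae; the cited model also handles nonrandom (aggregated) distributions, so this closed form is only illustrative:

```python
import numpy as np

def density_from_frequency(p_exceed, threshold=0):
    """Estimate mean larvae per branch from the proportion of branches with
    counts above `threshold`, assuming counts are Poisson distributed.
    Only threshold = 0 has the closed form used here; other thresholds (and
    nonrandom distributions such as the negative binomial) need a numerical
    inversion of the tail probability."""
    if threshold != 0:
        raise NotImplementedError("closed form shown for threshold = 0 only")
    p = np.clip(np.asarray(p_exceed, dtype=float), 1e-9, 1 - 1e-9)
    return -np.log(1.0 - p)

# Example: 60% of sampled branches carried at least one larva.
print(density_from_frequency(0.60))   # ~0.92 larvae per branch
```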
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Multi-stage methodology to detect health insurance claim fraud.
Johnson, Marina Evrim; Nagarur, Nagen
2016-09-01
Healthcare costs in the US, as well as in other countries, increase rapidly due to demographic, economic, social, and legal changes. This increase in healthcare costs impacts both government and private health insurance systems. Fraudulent behaviors of healthcare providers and patients have become a serious burden to insurance systems by bringing unnecessary costs. Insurance companies thus develop methods to identify fraud. This paper proposes a new multistage methodology for insurance companies to detect fraud committed by providers and patients. The first three stages aim at detecting abnormalities among providers, services, and claim amounts. Stage four then integrates the information obtained in the previous three stages into an overall risk measure. Subsequently, a decision tree based method in stage five computes risk threshold values. The final decision stating whether the claim is fraudulent is made by comparing the risk value obtained in stage four with the risk threshold value from stage five. The research methodology performs well on real-world insurance data.
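A minimal sketch of the last two stages, assuming the first three stages already produce per-claim abnormality scores; the feature weights, synthetic labels and tree settings are illustrative, not those of the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 2000
# Illustrative stage outputs: abnormality scores for provider, service, and claim amount.
scores = rng.random((n, 3))

# Stage 4: aggregate the three scores into one overall risk measure (weights assumed).
weights = np.array([0.4, 0.3, 0.3])
risk = scores @ weights

# Synthetic ground truth for the sketch: claims with high underlying risk tend to be fraud.
labels = risk + rng.normal(0.0, 0.05, n) > 0.55

# Stage 5: a depth-1 decision tree on the risk value yields an interpretable cut-off.
tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(risk.reshape(-1, 1), labels)
risk_threshold = tree.tree_.threshold[0]

# Final decision: flag claims whose aggregated risk exceeds the learned threshold.
flagged = risk > risk_threshold
print(f"threshold={risk_threshold:.3f}, flagged {flagged.sum()} of {n} claims")
```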
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yiyi; Wang, Junli; Qi, Shengli
In this report, a series of composite films consisting of polyimide as the matrix and multi-wall carbon nanotubes as the filler (PI/MWCNTs) were prepared by a water-based method with the use of triethylamine. Their dielectric properties were tested at frequencies between 100 Hz and 10 MHz, and it was revealed that the permittivity value behaved interestingly around the percolation threshold (8.01% in volume). The water-based method ensured that fillers had high dispersibility in the matrix before percolation, which led to a relatively high dielectric constant (284.28). However, the overlapping caused by excess MWCNTs created pathways for electrons inside the matrix, turning the permittivity negative. The former phenomenon was highly congruent with the percolation power law, while the latter could be explained by the Drude Model. AC conductivity was measured for more supportive information. Additionally, scanning electron microscopy and transmission electron microscopy were employed to record the MWCNTs' microscopic distribution and morphology at the percolation threshold.
Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K
2015-09-01
Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness and smallworldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
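A minimal sketch of the multi-threshold part of this idea (not the full cluster-enhanced MTPC correction): compute a network metric over a range of thresholds and permutation-test the group difference at each one. The group sizes, threshold grid and choice of global efficiency are assumptions for illustration:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)

def random_connectome(n_nodes=30, strength=1.0):
    w = rng.random((n_nodes, n_nodes)) * strength
    w = (w + w.T) / 2
    np.fill_diagonal(w, 0.0)
    return w

def metric_at_threshold(w, thr):
    g = nx.from_numpy_array((w >= thr).astype(float))
    return nx.global_efficiency(g)

# Two synthetic groups; group B has slightly weaker connections ("atrophy").
group_a = [random_connectome() for _ in range(12)]
group_b = [random_connectome(strength=0.9) for _ in range(12)]

thresholds = np.linspace(0.1, 0.6, 11)
for thr in thresholds:
    a = np.array([metric_at_threshold(w, thr) for w in group_a])
    b = np.array([metric_at_threshold(w, thr) for w in group_b])
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    # Permutation test of the group difference at this threshold.
    perms = []
    for _ in range(500):
        rng.shuffle(pooled)
        perms.append(pooled[:len(a)].mean() - pooled[len(a):].mean())
    p = np.mean(np.abs(perms) >= abs(observed))
    print(f"threshold={thr:.2f}  diff={observed:+.3f}  p={p:.3f}")
```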
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
Comparison between ABR with click and narrow band chirp stimuli in children.
Zirn, Stefan; Louza, Julia; Reiman, Viktor; Wittlinger, Natalie; Hempel, John-Martin; Schuster, Maria
2014-08-01
Click- and chirp-evoked auditory brainstem responses (ABR) are applied for the estimation of hearing thresholds in children. The present study analyzes ABR thresholds across a large sample of children's ears obtained with both methods. The aim was to demonstrate the correlation between the two methods using narrow band chirp and click stimuli. Click- and chirp-evoked ABRs were measured in 253 children aged from 0 to 18 years to determine their individual auditory thresholds. The delay-compensated stimuli were narrow band CE chirps with either 2000 Hz or 4000 Hz center frequencies. Measurements were performed consecutively during natural sleep, or under sedation or general anesthesia. Threshold estimation was performed for each measurement by two experienced audiologists. Pearson correlation analysis revealed highly significant correlations (r=0.94) between click- and chirp-derived thresholds for both 2 kHz and 4 kHz chirps. No considerable differences were observed between different age ranges or genders. Comparing the thresholds estimated using ABR with click stimuli and chirp stimuli, only 0.8-2% of the 2000 Hz NB-chirp and 0.4-1.2% of the 4000 Hz NB-chirp measurements differed by more than 15 dB for different degrees of hearing loss or normal hearing. The results suggest that either NB-chirp or click ABR is sufficient for threshold estimation. This holds for chirp frequencies of 2000 Hz and 4000 Hz. The use of either click- or chirp-evoked ABR allows a reduction of recording time in young infants. Nevertheless, to cross-check the results of one of the methods, we recommend measurements with the other method as well. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent between organizations, traditional pixel-based image classification methods cannot achieve comparable results across them; object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to meet this requirement. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a top-down region merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different feature types was confirmed. The segmented objects were then used as classification units to compute spectral information such as the mean, maximum, minimum, brightness and normalized values; spatial features such as the area, length, tightness and shape rule of the image objects; and texture features such as the mean, variance and entropy of the image objects, all of which served as classification features of the training samples. Based on reference images and field-survey sampling points, typical training samples were selected uniformly and randomly for each ground-object type, and the value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, a random-sampling field investigation achieved an overall accuracy of 90.31%, with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the results obtained from the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
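A minimal sketch of rule-set classification of segmented image objects, assuming each object has already been reduced to a handful of spectral, spatial and texture features; the class rules and threshold ranges are illustrative, not the ones stored in the study's repository:

```python
# Each segmented object is represented by a feature dictionary; the rules below
# are invented threshold ranges purely to show the decision-rule structure.
def classify_object(obj):
    rules = [
        ("water",       lambda o: o["ndwi"] > 0.3),
        ("mudflat",     lambda o: 0.0 < o["ndwi"] <= 0.3 and o["brightness"] > 120),
        ("marsh",       lambda o: o["ndvi"] > 0.4 and o["texture_entropy"] < 2.0),
        ("aquaculture", lambda o: o["shape_rule"] > 0.8 and o["ndwi"] > 0.1),
    ]
    for label, rule in rules:
        if rule(obj):          # first matching rule wins, as in a decision tree path
            return label
    return "other"

print(classify_object({"ndwi": 0.45, "ndvi": 0.1, "brightness": 90,
                       "texture_entropy": 1.5, "shape_rule": 0.2}))  # -> water
```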
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.
2016-01-01
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample. PMID:27822525
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, "pepper and salt" noise appears and classification results are affected when pixelwise classification algorithms are applied to high-resolution satellite images, because the spatial relationship among pixels is ignored. For classifying multi-temporal high-resolution images with limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are cut into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are formed into vectors. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed and the vector data are mapped into an alignment space. Finally, all data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the problem of "pepper and salt".
NASA Astrophysics Data System (ADS)
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between criteria for evaluating effects in E-learning programs as an example, we compare the results obtained from the respondents and from our method, and discuss the differences between the impact-relations maps produced by these two approaches.
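A minimal sketch of the DEMATEL step that the threshold applies to: normalize a direct-influence matrix, compute the total-relation matrix T = D(I - D)^-1, and keep only relations above a cut-off for the impact-relations map. A simple mean-plus-standard-deviation rule stands in here for the paper's maximum mean de-entropy algorithm, and the influence ratings are invented:

```python
import numpy as np

# Illustrative direct-influence matrix among four criteria (0-4 expert ratings).
A = np.array([[0, 3, 2, 1],
              [2, 0, 3, 2],
              [1, 2, 0, 3],
              [2, 1, 2, 0]], dtype=float)

# DEMATEL: normalize by the largest row/column sum, then T = D (I - D)^-1.
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
T = D @ np.linalg.inv(np.eye(len(A)) - D)

# Keep only influences above the cut-off for the impact-relations map.
# Here: mean + one standard deviation of T (a stand-in for the MMDE algorithm).
threshold = T.mean() + T.std()
edges = [(i, j) for i in range(len(T)) for j in range(len(T)) if i != j and T[i, j] > threshold]
print(f"threshold={threshold:.3f}, edges kept: {edges}")
```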
Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments
2015-01-01
each method on a 2.53 GHz Intel i5 laptop. All our algorithms are hand-optimized, implemented in Java and single threaded. To determine which algorithm...approach would be to label all the pixels in the image with an x, y, z point. However, the angular resolution of the camera is finer than that of the...edge criterion. That is, each edge is either present or absent. In [42], edge existence is further screened by a fixed threshold for angular
Development of a thresholding algorithm for calcium classification at multiple CT energies
NASA Astrophysics Data System (ADS)
Ng, LY.; Alssabbagh, M.; Tajuddin, A. A.; Shuaib, I. L.; Zainon, R.
2017-05-01
The objective of this study was to develop a thresholding method for calcium classification at different concentrations using single-energy computed tomography (SECT). Five different concentrations of calcium chloride were filled into PMMA tubes and placed inside a water-filled PMMA phantom (diameter 10 cm). The phantom was scanned at 70, 80, 100, 120 and 140 kV using SECT. CARE DOSE 4D was used and the slice thickness was set to 1 mm for all energies. ImageJ software, developed at the National Institutes of Health (NIH), was used to measure the CT numbers for each calcium concentration from the CT images. The results were compared with the developed algorithm for verification. The CT numbers measured with the developed algorithm and with ImageJ were similar. The multi-thresholding algorithm was found to be able to distinguish different concentrations of calcium chloride. However, it was unable to detect low concentrations of calcium chloride and iron (III) nitrate with CT numbers between 25 HU and 65 HU. The developed thresholding method used in this study may help to differentiate between calcium plaques and other types of plaques in blood vessels, as it is proven to have a good ability to detect high concentrations of calcium chloride. However, the algorithm needs to be improved to overcome the limitation of detecting calcium chloride solutions whose CT numbers are similar to those of iron (III) nitrate solution.
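A minimal sketch of multi-threshold classification of CT numbers into concentration classes; the HU break points below are purely illustrative and do not reproduce the study's calibrated values:

```python
import numpy as np

# Class boundaries in Hounsfield units (assumed for illustration only).
hu_thresholds = [65, 130, 200, 300]
labels = ["background/low", "Ca class 1", "Ca class 2", "Ca class 3", "Ca class 4"]

def classify_voxel(hu):
    """Assign a concentration class by locating the HU value among the thresholds."""
    return labels[int(np.searchsorted(hu_thresholds, hu))]

roi_mean_hu = [40, 90, 150, 260, 420]   # example ROI means at one tube voltage
print([classify_voxel(v) for v in roi_mean_hu])
```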
Spectrometer capillary vessel and method of making same
Linehan, J.C.; Yonker, C.R.; Zemanian, T.S.; Franz, J.A.
1995-11-21
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube. 13 figs.
Goldrath, Dara A.; Wright, Michael T.; Belitz, Kenneth
2010-01-01
Groundwater quality in the 188-square-mile Colorado River Study unit (COLOR) was investigated October through December 2007 as part of the Priority Basin Project of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and the U.S. Geological Survey (USGS) is the technical project lead. The Colorado River study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within COLOR, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 28 wells in three study areas in San Bernardino, Riverside, and Imperial Counties. Twenty wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the Study unit; these wells are termed 'grid wells'. Eight additional wells were selected to evaluate specific water-quality issues in the study area; these wells are termed `understanding wells.' The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], gasoline oxygenates and degradates, pesticides and pesticide degradates, pharmaceutical compounds), constituents of special interest (perchlorate, 1,4-dioxane, and 1,2,3-trichlorpropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), and radioactive constituents. Concentrations of naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, approximately 220 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 30 percent of the wells, and the results were used to evaluate the quality of the data obtained from the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a significant source of bias in the data. Differences between replicate samples were within acceptable ranges and matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw groundwater typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared to regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH) and to thresholds established for aesthetic concerns by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate compliance or noncompliance with those thresholds. The concentrations of most constituents detected in groundwater samples were below drinking-water thresholds. 
Volatile organic compounds (VOC) were detected in approximately 35 percent of grid well samples; all concentrations were below health-based thresholds. Pesticides and pesticide degradates were detected in about 20 percent of all samples; detections were below health-based thresholds. No concentrations of constituents of special interest or nutrients were detected above health-based thresholds. Most of the major and minor ion constituents sampled do not have health-based thresholds; the exception is chloride. Concentrations of chloride, sulfate, and total dis
NASA Astrophysics Data System (ADS)
Gallego, Eva; Teixidor, Pilar; Roca, Francisco Javier; Perales, José Francisco; Gadea, Enrique
2018-06-01
A comparison was made between the relative performance of active and passive sampling methods for the analysis of 1,3-butadiene in outdoor air. Active and passive sampling was conducted using multi-sorbent bed tubes (Carbotrap, Carbopack X, Carboxen 569) and RAD141 Radiello® diffusive samplers (filled with Carbopack X), respectively. Daily duplicate samples of multi-sorbent bed tubes were taken over a period of 14 days (9 + 5 days) at El Morell (Tarragona, Spain), near the petrochemical area. As 1,3-butadiene is a reactive pollutant and can be rapidly oxidized, half of the samplers were equipped with ozone scrubbers. Samples consisted of two tubes connected in series (front and back) to allow the determination of breakthrough. Quadruplicate samples of Radiello® tubes were also taken over a period of 14 days (9 days and 5 days). During those days, ozone concentration was measured using RAD172 Radiello® samplers. In addition, daily duplicate samples of multi-sorbent bed tubes were taken in the city of Barcelona over a period of 8 days; simultaneously, 4 samples of Radiello® tubes were exposed to outdoor air. Sampling was done throughout June and July 2017. Analysis was performed by thermal desorption coupled with gas chromatography/mass spectrometry. The analytical performance of the two sampling methods was evaluated through several quality assurance parameters, with results showing that their performances are quite similar: both display low detection limits, good precision, linearity and desorption efficiency, and low blank values, with low breakthrough for the multi-sorbent bed tubes. However, the Radiello® samplers were not able to take up episodic high 1,3-butadiene concentrations, leading to underestimation of the real values. Hence, we can conclude that Radiello® samplers can be used for baseline 1,3-butadiene levels, whereas multi-sorbent bed tubes would be advisable when relevant episodes are expected.
Beale, D J; Crosswell, J; Karpe, A V; Ahmed, W; Williams, M; Morrison, P D; Metcalfe, S; Staley, C; Sadowsky, M J; Palombo, E A; Steven, A D L
2017-12-31
The impact of anthropogenic factors arising from point and non-point pollution sources at a multi-commodity marine port and its surrounding ecosystems was studied using sediment samples collected from a number of onshore (Gladstone Harbour and Facing Island) and offshore (Heron Island and Fitzroy Reefs) sites in Australia's Central Queensland. Sediment samples were analyzed for trace metals, organic carbon, polycyclic aromatic hydrocarbons (PAH), emerging chemicals of concern (ECC) and sterols. Similarly, the biological and biochemical interaction between the reef and its environment was analyzed with the multi-omic tools of next-generation sequencing characterization of the bacterial community and microbial community metabolic profiling. Overall, trace elements were observed at the lower end of the Australian environmental guideline values at the offshore sites, while higher values were observed for the onshore locations. Nickel and copper were observed above the high trigger value threshold at the onshore sites. The levels of PAH were below limits of detection across all sites. However, some of the ECC and sterols were observed at higher concentrations at both onshore and offshore locations, notably the cholesterol family sterols and 17α-ethynylestradiol. Multi-omic analyses also indicated possible thermal and photo irradiation stressors on the bacterial communities at all the tested sites. The observed populations of γ-proteobacteria were found in combination with an increased pool of fatty acids that indicates fatty acid synthesis and utilisation of the intermediates of the shikimate pathways. This study demonstrates the value of applying a multi-omics approach for ecological assessments, in which a more detailed assessment of physical and chemical contaminants and their impact on the community bacterial biome is obtained. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
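A minimal sketch of the lake dynamics this formulation abstracts: phosphorus accumulates through anthropogenic loading, stochastic natural inflow and nonlinear self-recycling, and is lost at a linear rate; crossing the unstable equilibrium tips the lake toward the eutrophic state. The parameter values (b = 0.42, q = 2) and loadings below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import brentq

def simulate_lake(decisions, b=0.42, q=2.0, seed=0):
    """Phosphorus level X_t under anthropogenic loading `decisions` (a_t),
    stochastic natural inflow, self-recycling X^q/(1+X^q) and removal b*X."""
    rng = np.random.default_rng(seed)
    n_years = len(decisions)
    natural = rng.lognormal(mean=np.log(0.02), sigma=0.2, size=n_years)
    x = np.zeros(n_years + 1)
    for t in range(n_years):
        x[t + 1] = x[t] + decisions[t] + natural[t] + x[t] ** q / (1 + x[t] ** q) - b * x[t]
    return x

# Critical phosphorus level (unstable equilibrium) beyond which the lake flips
# to the eutrophic state: the root of b*X = X^q / (1 + X^q) between 0 and the
# eutrophic equilibrium, found numerically.
b, q = 0.42, 2.0
x_crit = brentq(lambda x: x ** q / (1 + x ** q) - b * x, 0.1, 1.5)

x = simulate_lake(np.full(100, 0.05))
print(f"critical P ~ {x_crit:.3f}; years above threshold: {(x > x_crit).sum()}")
```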
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
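A minimal sketch of choosing a decision criterion from an ROC curve via the Youden index, using synthetic estimated dental ages; the simulated age distributions are illustrative only:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
# Synthetic example: estimated dental ages for children whose true age is
# below (0) or at/above (1) the 14-year threshold; values are illustrative.
true_over_14 = np.r_[np.zeros(300, dtype=int), np.ones(300, dtype=int)]
estimated_age = np.r_[rng.normal(12.5, 1.0, 300), rng.normal(14.5, 1.0, 300)]

fpr, tpr, cutoffs = roc_curve(true_over_14, estimated_age)
youden = tpr - fpr                      # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC={roc_auc_score(true_over_14, estimated_age):.3f}, "
      f"cut-off={cutoffs[best]:.2f} y, sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```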
Probabilistic peak detection in CE-LIF for STR DNA typing.
Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel
2017-07-01
In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of raw electropherogram data from a laser-induced fluorescence multi-CE system. As the raw data are informative up to a single data point, the conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns a posterior probability reflecting the data point's relevance with respect to peak detection criteria. Peaks of low intensity generated from a truly existing allele can thus constitute evidential value instead of fully discarding them and contemplating a potential allele drop-out. This way of working utilizes the information available within each individual data point and thus avoids making early (binary) decisions on the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of the peak heights and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
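A minimal sketch of the parametric-bootstrap power idea: draw gamma-distributed fish-tissue Se samples around an assumed mean-variance relationship and ask how often a one-sided test places the mean above the management threshold. The coefficient of variation and the test construction here are assumptions, not the fitted relationship from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

def power(true_mean, threshold, n_fish, cv=0.4, alpha=0.05, n_boot=2000):
    """Probability of detecting mean Se above `threshold` with a one-sided
    bootstrap test, assuming whole-body Se ~ gamma with an assumed coefficient
    of variation (the study instead fits the mean-variance relation empirically)."""
    var = (cv * true_mean) ** 2
    shape, scale = true_mean ** 2 / var, var / true_mean
    detections = 0
    for _ in range(n_boot):
        sample = rng.gamma(shape, scale, n_fish)
        # Declare a detection if the lower alpha-quantile of the bootstrap
        # distribution of the sample mean sits above the threshold.
        boot_means = rng.choice(sample, (500, n_fish)).mean(axis=1)
        if np.quantile(boot_means, alpha) > threshold:
            detections += 1
    return detections / n_boot

# e.g. 8 fish, true mean 1 mg/kg above a 4 mg Se/kg management threshold
print(f"power ~ {power(true_mean=5.0, threshold=4.0, n_fish=8):.2f}")
```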
Multi-temporal clustering of continental floods and associated atmospheric circulations
NASA Astrophysics Data System (ADS)
Liu, Jianyu; Zhang, Yongqiang
2017-12-01
Investigating the clustering of floods has important social, economic and ecological implications. This study examines the clustering of Australian floods at different temporal scales and its possible physical mechanisms. Flood series with different severities are obtained by peaks-over-threshold (POT) sampling at four flood thresholds. At the intra-annual scale, Cox regression and monthly frequency methods are used to examine whether and when flood clustering exists, respectively. At the inter-annual scale, dispersion indices with four time-varying windows are applied to investigate inter-annual flood clustering and its variation. Furthermore, the kernel occurrence rate estimate and bootstrap resampling methods are used to identify flood-rich/flood-poor periods. Finally, the seasonal variation of horizontal wind at 850 hPa and vertical wind velocity at 500 hPa is used to investigate the possible mechanisms causing the temporal flood clustering. Our results show that: (1) flood occurrences exhibit clustering at the intra-annual scale, regulated by climate indices representing the impacts of the Pacific and Indian Oceans; (2) the flood-rich months occur from January to March over northern Australia, and from July to September over southwestern and southeastern Australia; (3) stronger inter-annual clustering takes place across southern Australia than northern Australia; and (4) Australian floods are characterised by regional flood-rich and flood-poor periods, with 1987-1992 identified as a flood-rich period across southern Australia but a flood-poor period across northern Australia, and 2001-2006 being a flood-poor period across most regions of Australia. The intra-annual and inter-annual clustering and temporal variation of flood occurrences are in accordance with the variation of atmospheric circulation. These results provide relevant information for flood management under the influence of climate variability and are therefore helpful for developing flood hazard mitigation schemes.
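A minimal sketch of POT sampling followed by an inter-annual dispersion (variance-to-mean) index on annual event counts; the synthetic flow record and the 99.5th-percentile threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def pot_sample(flow, threshold):
    """Indices of peaks-over-threshold events (simple declustering: a peak is
    a local maximum that exceeds the threshold)."""
    above = flow > threshold
    peaks = [t for t in range(1, len(flow) - 1)
             if above[t] and flow[t] >= flow[t - 1] and flow[t] > flow[t + 1]]
    return np.array(peaks)

def dispersion_index(event_years, years):
    """Variance-to-mean ratio of annual event counts; > 1 suggests
    inter-annual clustering (over-dispersion), < 1 regularity."""
    counts = np.array([(event_years == y).sum() for y in years])
    return counts.var(ddof=1) / counts.mean()

# Synthetic 40-year daily-flow record with an assumed POT threshold.
years = np.arange(1980, 2020)
flow = rng.gamma(2.0, 50.0, size=365 * len(years))
peaks = pot_sample(flow, threshold=np.quantile(flow, 0.995))
event_years = years[peaks // 365]
print(f"events={len(peaks)}, dispersion index={dispersion_index(event_years, years):.2f}")
```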
NASA Astrophysics Data System (ADS)
Noufal, Manthala Padannayil; Abdullah, Kallikuzhiyil Kochunny; Niyas, Puzhakkal; Subha, Pallimanhayil Abdul Raheem
2017-12-01
Aim: This study evaluates the impact of using different evaluation criteria on gamma pass rates in two commercially available QA methods employed for the verification of VMAT plans, using different hypothetical planning target volumes (PTVs) and anatomical regions. Introduction: Volumetric modulated arc therapy (VMAT) is a widely accepted technique to deliver highly conformal treatment in a very efficient manner. As its level of complexity is high in comparison to intensity-modulated radiotherapy (IMRT), the implementation of stringent quality assurance (QA) before treatment delivery is of paramount importance. Material and Methods: Two sets of VMAT plans were generated using the Eclipse planning system, one with five different complex hypothetical three-dimensional PTVs and one including three anatomical regions. The verification of these plans was performed using a MatriXX ionization chamber array embedded inside a MultiCube phantom and a Varian EPID dosimetric system attached to a Clinac iX. The plans were evaluated based on the 3%/3 mm, 2%/2 mm, and 1%/1 mm global gamma criteria and with three low-dose threshold values (0%, 10%, and 20%). Results: The gamma pass rates were above 95% in all VMAT plans when the 3%/3 mm gamma criterion was used and no threshold was applied. In both systems, the pass rates decreased as the criteria became stricter. Higher pass rates were observed when no threshold was applied, and they tended to decrease for the 10% and 20% thresholds. Conclusion: The results confirm the suitability of the equipment used and the validity of the plans. The study also confirmed that the threshold settings greatly affect the gamma pass rates, especially for lower gamma criteria.
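A minimal sketch of a global gamma evaluation in one dimension, showing how a low-dose threshold excludes points from scoring; this is not the MatriXX or EPID vendor software, and the dose profiles are synthetic:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0, low_dose_cut=0.0):
    """Global 1-D gamma pass rate: for each reference point, search evaluated
    points for the minimum gamma; reference points below `low_dose_cut`
    (fraction of the reference maximum) are excluded from scoring."""
    x = np.arange(len(dose_ref)) * spacing_mm
    dd = dd_pct / 100.0 * dose_ref.max()          # global dose-difference criterion
    keep = dose_ref >= low_dose_cut * dose_ref.max()
    gammas = []
    for i in np.flatnonzero(keep):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((dose_eval - dose_ref[i]) / dd) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# Synthetic profiles: evaluated dose slightly shifted and scaled vs reference.
x = np.linspace(-50, 50, 201)
ref = np.exp(-(x / 20.0) ** 2) * 100.0
ev = np.exp(-((x - 1.0) / 20.0) ** 2) * 102.0
for cut in (0.0, 0.1, 0.2):
    print(f"threshold {int(cut * 100)}%: pass rate "
          f"{gamma_pass_rate(ref, ev, spacing_mm=0.5, dd_pct=3, dta_mm=3, low_dose_cut=cut):.1f}%")
```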
Gustafson, Samantha; Pittman, Andrea; Fanning, Robert
2013-06-01
This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. Threshold and RECD measures obtained with a personal earmold were significantly different from those obtained with a foam tip on repeated measures analyses of variance. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and RECD, equal and accurate estimates of real-ear sound pressure level are obtained.
Li, Wenwen; Janardhan, Ajit H.; Fedorov, Vadim V.; Sha, Qun; Schuessler, Richard B.; Efimov, Igor R.
2011-01-01
Background Implantable device therapy of atrial fibrillation (AF) is limited by pain from high-energy shocks. We developed a low-energy multi-stage defibrillation therapy and tested it in a canine model of AF. Methods and Results AF was induced by burst pacing during vagus nerve stimulation. Our novel defibrillation therapy consisted of three stages: ST1 (1-4 low energy biphasic shocks), ST2 (6-10 ultra-low energy monophasic shocks), and ST3 (anti-tachycardia pacing). Firstly, ST1 testing compared single or multiple monophasic (MP) and biphasic (BP) shocks. Secondly, several multi-stage therapies were tested: ST1 versus ST1+ST3 versus ST1+ST2+ST3. Thirdly, three shock vectors were compared: superior vena cava to distal coronary sinus (SVC>CSd), proximal coronary sinus to left atrial appendage (CSp>LAA) and right atrial appendage to left atrial appendage (RAA>LAA). The atrial defibrillation threshold (DFT) of 1BP shock was less than 1MP shock (0.55 ± 0.1 versus 1.38 ± 0.31 J; p =0.003). 2-3 BP shocks terminated AF with lower peak voltage than 1BP or 1MP shock and with lower atrial DFT than 4 BP shocks. Compared to ST1 therapy alone, ST1+ST3 lowered the atrial DFT moderately (0.51 ± 0.46 versus 0.95 ± 0.32 J; p = 0.036) while a three-stage therapy, ST1+ST2+ST3, dramatically lowered the atrial DFT (0.19 ± 0.12 J versus 0.95 ± 0.32 J for ST1 alone, p=0.0012). Finally, the three-stage therapy ST1+ST2+ST3 was equally effective for all studied vectors. Conclusions Three-stage electrotherapy significantly reduces the AF defibrillation threshold and opens the door to low energy atrial defibrillation at or below the pain threshold. PMID:21980076
Liu, Zhihua; Yang, Jian; He, Hong S.
2013-01-01
The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls responds to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression tree methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and are more likely to lead to larger fire sizes. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
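A minimal sketch of the "moving window" idea follows: sort records by fire size, fit a boosted tree within each sliding window, and track the relative variable importance across windows. The data frame, column names, window width, and the use of scikit-learn gradient boosting in place of the original boosted regression trees are all assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def moving_window_importance(df, size_col, predictors, response, width=200, step=50):
    """Relative importance of predictors within sliding windows of fire size."""
    df = df.sort_values(size_col).reset_index(drop=True)
    rows = []
    for start in range(0, len(df) - width + 1, step):
        window = df.iloc[start:start + width]
        model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
        model.fit(window[predictors], window[response])
        imp = model.feature_importances_ / model.feature_importances_.sum()
        rows.append({"median_fire_size": window[size_col].median(),
                     **dict(zip(predictors, imp))})
    return pd.DataFrame(rows)

# Hypothetical fires: weather dominates spread once fires grow large
rng = np.random.default_rng(0)
n = 600
fires = pd.DataFrame({
    "fire_size": np.sort(rng.lognormal(4, 1, n)),
    "fuel": rng.uniform(size=n),
    "topography": rng.uniform(size=n),
    "weather": rng.uniform(size=n),
})
small = fires["fire_size"] < np.median(fires["fire_size"])
fires["spread_rate"] = (np.where(small, 3 * fires["fuel"], 3 * fires["weather"])
                        + 0.5 * fires["topography"] + rng.normal(scale=0.2, size=n))

result = moving_window_importance(fires, "fire_size",
                                  ["fuel", "topography", "weather"], "spread_rate")
print(result.head())
# The fire size at which "weather" overtakes "fuel" approximates the scale threshold.
```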
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class service in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components (encoding, population initialization, fitness function, mutation, etc.) are all optimized in terms of the traits of the CAC problem. Simulations demonstrate that the proposed CAC scheme outperforms similar schemes, confirming that the optimization is effective. Finally, the simulations show the efficiency of OGA.
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
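The feed-forward pipeline described above, a base classifier followed by per-class rejection thresholds on posterior probabilities, can be sketched as follows. The classifier choice, the threshold values, and the toy data are illustrative assumptions rather than the authors' iterative threshold selection algorithm.

```python
import numpy as np
from sklearn.svm import SVC

UNKNOWN = -1  # label returned for rejected (unknown-class) examples

def open_set_predict(clf, X, thresholds):
    """Feed-forward open-set prediction: pick a candidate class, then accept it
    only if its posterior probability exceeds that class's rejection threshold."""
    proba = clf.predict_proba(X)
    candidates = proba.argmax(axis=1)
    accept = proba[np.arange(len(X)), candidates] >= thresholds[candidates]
    return np.where(accept, clf.classes_[candidates], UNKNOWN)

# Hypothetical usage with a Platt-calibrated SVM as the base classifier
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [4, 4]], 100, axis=0)
y_train = np.repeat([0, 1], 100)
base = SVC(probability=True).fit(X_train, y_train)

thresholds = np.array([0.9, 0.9])   # per-class rejection thresholds (assumed values)
X_test = np.array([[0.1, -0.2], [4.2, 3.9], [10.0, -10.0]])
print(open_set_predict(base, X_test, thresholds))  # the outlier may fall below both thresholds
```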
Multi-mycotoxin stable isotope dilution LC-MS/MS method for Fusarium toxins in beer.
Habler, Katharina; Gotthardt, Marina; Schüler, Jan; Rychlik, Michael
2017-03-01
A stable isotope dilution LC-MS/MS multi-mycotoxin method was developed for 12 different Fusarium toxins, including modified mycotoxins, in beer (deoxynivalenol-3-glucoside, deoxynivalenol, 3-acetyldeoxynivalenol, 15-acetyl-deoxynivalenol, HT2-toxin, T2-toxin, enniatin B, B1, A1, A, beauvericin and zearalenone). For sample preparation and purification of beer, a combined solid-phase extraction for trichothecenes, enniatins, beauvericin and zearalenone was developed for the first time. The validation of the new method gave satisfactory results: intra-day and inter-day precision and recoveries were 1-5%, 2-8% and 72-117%, respectively. In total, 61 different organic and conventional beer samples from Germany and all over the world were analyzed using the newly developed multi-mycotoxin method. In summary, deoxynivalenol, deoxynivalenol-3-glucoside, 3-acetyldeoxynivalenol and enniatin B were quantified at rather low levels in the investigated beer samples. None of the other monitored Fusarium toxins, such as 15-acetyldeoxynivalenol, HT2- and T2-toxin, zearalenone, enniatin B1, A1, A or beauvericin, were detectable. Copyright © 2016 Elsevier Ltd. All rights reserved.
Improved modified energy ratio method using a multi-window approach for accurate arrival picking
NASA Astrophysics Data System (ADS)
Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun
2017-04-01
To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
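A minimal sketch of an energy-ratio style first-break picker in the spirit of the MER/IMER approach is given below: a post/pre energy ratio weighted by the instantaneous amplitude, with a moving-average filter applied to the attribute curve before picking its maximum. The window lengths, the cubic exponent, and the single smoothing pass standing in for the full multi-window scheme are assumptions, not the authors' exact formulation.

```python
import numpy as np

def mer_attribute(trace, window):
    """Modified-energy-ratio-style attribute: (|x_i| * post/pre energy ratio)^3."""
    x = np.asarray(trace, float)
    eps = 1e-12
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    n = len(x)
    mer = np.zeros(n)
    for i in range(window, n - window):
        pre = csum[i + 1] - csum[i + 1 - window]     # energy in the trailing window
        post = csum[i + 1 + window] - csum[i + 1]    # energy in the leading window
        mer[i] = (abs(x[i]) * post / (pre + eps)) ** 3
    return mer

def imer_style_pick(trace, window, smooth=11):
    """Pick the first break from the moving-average-smoothed attribute curve
    (a simplified stand-in for the multi-window IMER scheme)."""
    mer = mer_attribute(trace, window)
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(mer, kernel, mode="same")
    return int(np.argmax(smoothed))

# Hypothetical noisy microseismic trace with a first break near sample 300
rng = np.random.default_rng(1)
trace = rng.normal(scale=0.1, size=1000)
trace[300:] += np.sin(0.3 * np.arange(700)) * np.exp(-0.004 * np.arange(700))
print(imer_style_pick(trace, window=50))
```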
Evaluation of periodontitis in hospital outpatients with major depressive disorder
Solis, A. C. O.; Marques, A. H.; Pannuti, C. M.; Lotufo, R. F. M.; Lotufo-Neto, F.
2013-01-01
Background and Objective Major depressive disorder (MDD) has been associated with alterations in the neuroendocrine system and immune function and may be associated with an increased susceptibility to cardiovascular disease, cancer and autoimmune/inflammatory disease. This study was conducted to investigate the relationship between periodontitis and MDD in a convenience sample of hospital outpatients. Material and Methods The sample consisted of 72 physically healthy subjects (36 outpatients with MDD and 36 age-matched controls [± 3 years]). Patients with bipolar disorder, eating disorders and psychotic disorders were excluded. Probing pocket depth and clinical attachment level were recorded at six sites per tooth. Depression was assessed by means of the Structured Clinical Interview for DSM-IV. Results Extent of clinical attachment level and probing pocket depth were not different between controls and subjects with depression for the following thresholds: ≥ 3 mm (Mann-Whitney, p = 0.927 and 0.756); ≥ 4 mm (Mann-Whitney, p = 0.656 and 0.373); ≥ 5 mm (Mann-Whitney, p = 0.518 and 0.870); and ≥ 6 mm (Mann-Whitney, p = 0.994 and 0.879). Depression parameters were not associated with clinical attachment level ≥ 5 mm in this sample. Smoking was associated with loss of attachment ≥ 5 mm in the multi-variable logistic regression model (odds ratio = 6.99, 95% confidence interval = 2.00–24.43). Conclusions In this sample, periodontal clinical parameters were not different between patients with MDD and control subjects. There was no association between depression and periodontitis. PMID:23586804
Sabrina, Rabehi; Mossadak, Hamdi Taha; Bakir, Mamache; Asma, Meghezzi; Khaoula, Boushaba
2018-01-01
Aim: The aim of this study was to detect Brucella spp. DNA in milk samples collected from seronegative cows using a real-time polymerase chain reaction (PCR) assay, for diagnosis of brucellosis in seronegative dairy cows to prevent transmission of the disease to humans and to reduce economic losses in animal production. Materials and Methods: In this study, 65 milk samples were investigated for the detection of Brucella spp. The detection of the IS711 gene in all samples was done by a real-time PCR assay using the comparative cycle threshold method. Results: The results show that of the 65 DNA samples tested, 2 (3.08%) were positive for Brucella infection. The mean cycle threshold values of the IS711 real-time PCR test were 37.97 and 40.48, indicating a positive reaction. Conclusion: The results of the present study indicated that real-time PCR appears to offer several advantages over serological tests. For this reason, real-time PCR should be validated on representative numbers of Brucella-infected and Brucella-free samples before being implemented in routine diagnosis of human and animal brucellosis for controlling this disease. PMID:29657430
Testing the applicability of the k0-NAA method at the MINT's TRIGA MARK II reactor
NASA Astrophysics Data System (ADS)
Siong, Wee Boon; Dung, Ho Manh; Wood, Ab. Khalik; Salim, Nazaratul Ashifa Abd.; Elias, Md. Suhaimi
2006-08-01
The Analytical Chemistry Laboratory at MINT has been using the NAA technique since the 1980s and is the only laboratory in Malaysia equipped with a research reactor, namely the TRIGA MARK II. Throughout the years the development of the NAA technique has been very encouraging, and it was made applicable to a wide range of samples. At present, the k0 method has become the preferred standardization method of NAA (k0-NAA) due to its multi-elemental analysis capability without using standards. Additionally, the k0 method describes NAA in physically and mathematically understandable definitions and is very suitable for computer evaluation. Eventually, the k0-NAA method was adopted by MINT in 2003, in collaboration with the Nuclear Research Institute (NRI), Vietnam. The reactor neutron parameters (α and f) for the pneumatic transfer system and for the rotary rack at various locations, as well as the detector efficiencies, were determined. After calibration of the reactor and the detectors, the implemented k0 method was validated by analyzing some certified reference materials (including IAEA Soil 7, NIST 1633a, NIST 1632c, NIST 1646a and IAEA 140/TM). The analysis results of the CRMs showed an average u score well below the threshold value of 2, with a precision of better than ±10% for most of the elemental concentrations obtained, validating herewith the introduction of the k0-NAA method at MINT.
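The u score used above for CRM validation is commonly computed as the difference between measured and certified values divided by the combined standard uncertainty; values below about 2 are usually taken as agreement with the certificate. A minimal sketch follows; the numeric values are invented, not from the study.

```python
import math

def u_score(measured, u_measured, certified, u_certified):
    """u score = |measured - certified| / sqrt(u_measured^2 + u_certified^2)."""
    return abs(measured - certified) / math.sqrt(u_measured ** 2 + u_certified ** 2)

# Hypothetical element concentration in a CRM (mg/kg)
print(u_score(measured=52.1, u_measured=2.0, certified=50.0, u_certified=1.5))  # ~0.84, below 2
```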
Liu, Fang
2016-01-01
In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
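A minimal sketch of this kind of Bayesian calculation, using a conjugate Beta prior on the AE incidence proportion, is shown below. The flat Beta(1, 1) prior, the threshold, and the confidence level are illustrative assumptions, not the article's recommended settings.

```python
from scipy.stats import beta

def ae_summary(n_patients, n_events, p_threshold, a=1.0, b=1.0, conf=0.95):
    """Beta-Binomial summary for a rare-AE incidence proportion p.
    Posterior is Beta(a + events, b + non-events) under a Beta(a, b) prior."""
    post = beta(a + n_events, b + n_patients - n_events)
    return {
        "P(p > threshold)": 1.0 - post.cdf(p_threshold),
        "upper bound on p": post.ppf(conf),        # one-sided upper credible bound
        "lower bound on p": post.ppf(1.0 - conf),  # one-sided lower credible bound
    }

def min_events_to_exceed(n_patients, p_threshold, conf=0.95, a=1.0, b=1.0):
    """Smallest number of AE cases after which we are `conf` sure that p exceeds
    the threshold (a sketch of item (3) above, with a flat prior)."""
    for x in range(n_patients + 1):
        post = beta(a + x, b + n_patients - x)
        if 1.0 - post.cdf(p_threshold) >= conf:
            return x
    return None

print(ae_summary(n_patients=30, n_events=1, p_threshold=0.05))
print(min_events_to_exceed(n_patients=30, p_threshold=0.05))
```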
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
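The bootstrap-based convergence check can be sketched as follows: resample the model evaluations, recompute a sensitivity index per parameter, and use the width of the bootstrap confidence interval as the convergence criterion, with a screening threshold flagging insensitive parameters. The index used here (absolute Spearman correlation), the toy model, the CI-width tolerance, and the screening threshold are all illustrative assumptions, not the estimators or models of the study.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_sensitivity(X, y, n_boot=500, seed=0):
    """Mean sensitivity indices and bootstrap 95% CI widths, using the absolute
    Spearman correlation between each parameter sample and the output as a
    simple stand-in sensitivity index."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    estimates = np.empty((n_boot, k))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                     # resample model evaluations
        for j in range(k):
            estimates[b, j] = abs(spearmanr(X[idx, j], y[idx]).correlation)
    lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)
    return estimates.mean(axis=0), hi - lo

# Toy model: y depends strongly on x1, weakly on x2, not at all on x3
rng = np.random.default_rng(1)
X = rng.uniform(size=(400, 3))
y = 5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=400)

indices, widths = bootstrap_sensitivity(X, y)
screening_threshold = 0.1                               # assumed screening threshold
print("indices:", indices.round(2))
print("converged:", bool((widths < 0.1).all()))         # assumed CI-width tolerance
print("screened out:", indices < screening_threshold)
```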
Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R.; del Río-Navarro, Blanca E.; Mendoza-Vargas, Alfredo; Sánchez, Filiberto
2017-01-01
Background In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. The absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need of a reference sample. However, the background fluorescence represents a challenge to determine the absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as a new approach for absolute gene expression analysis in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. Methods We extracted the RNA from 16 children leukocyte samples (nine males and seven females, ages 6–10 years). An Affymetrix Gene Chip Human Gene 1.0 ST Array was carried out for each sample and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). Results From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression level by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes cells, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing a transcriptome profiling of microarray data without the need of an additional reference experiment. Discussion Our approach based on the establishment of a threshold for absolute gene expression analysis will allow a new way to analyze thousands of microarrays from public databases. This allows the study of different human diseases without the need of having additional samples for relative expression experiments. PMID:29230367
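A minimal sketch of the thresholding idea follows: since Y-chromosome genes cannot be expressed in female subjects, their fluorescence in female arrays is pure background, and a high percentile of that background can serve as the absolute expression threshold. The probe names, the percentile, and the random matrix are placeholders for illustration, not the study's exact procedure.

```python
import numpy as np
import pandas as pd

def absolute_expression_threshold(expr, y_probes, female_samples, percentile=95):
    """Use Y-chromosome probe fluorescence in female arrays as background and
    take a high percentile of it as the absolute expression threshold."""
    background = expr.loc[y_probes, female_samples].to_numpy().ravel()
    return np.percentile(background, percentile)

# Hypothetical log2 fluorescence matrix: rows = probes, columns = samples
rng = np.random.default_rng(0)
samples = [f"M{i}" for i in range(9)] + [f"F{i}" for i in range(7)]
expr = pd.DataFrame(rng.normal(6, 1, size=(100, 16)),
                    index=[f"gene{i}" for i in range(100)], columns=samples)
y_probes = ["gene0", "gene1", "gene2"]       # stand-ins for DDX3Y, TXLNG2P, EIF1AY
female_samples = [s for s in samples if s.startswith("F")]

threshold = absolute_expression_threshold(expr, y_probes, female_samples)
expressed = expr.median(axis=1) > threshold  # expressed vs non-expressed call per gene
print(round(threshold, 2), int(expressed.sum()))
```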
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
Ni, Yongnian; Lai, Yanhua; Brandes, Sarina; Kokot, Serge
2009-08-11
Multi-wavelength fingerprints of Cassia seed, a traditional Chinese medicine (TCM), were collected by high-performance liquid chromatography (HPLC) at two wavelengths with the use of diode array detection. The two data sets of chromatograms were combined by the data fusion-based method. This data set of fingerprints was compared separately with the two data sets collected at each of the two wavelengths. It was demonstrated with the use of principal component analysis (PCA), that multi-wavelength fingerprints provided a much improved representation of the differences in the samples. Thereafter, the multi-wavelength fingerprint data set was submitted for classification to a suite of chemometrics methods viz. fuzzy clustering (FC), SIMCA and the rank ordering MCDM PROMETHEE and GAIA. Each method highlighted different properties of the data matrix according to the fingerprints from different types of Cassia seeds. In general, the PROMETHEE and GAIA MCDM methods provided the most comprehensive information for matching and discrimination of the fingerprints, and appeared to be best suited for quality assurance purposes for these and similar types of sample.
Visual sensitivity to spatially sampled modulation in human observers
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Macleod, Donald I. A.
1991-01-01
Thresholds were measured for detecting spatial luminance modulation in regular lattices of visually discrete dots. Thresholds for modulation of a lattice are generally higher than the corresponding threshold for modulation of a continuous field, and the size of the threshold elevation, which depends on the spacing of the lattice elements, can be as large as one log unit. The largest threshold elevations are seen when the sample spacing is 12 min arc or greater. Theories based on response compression cannot explain the further observation that the threshold elevations due to spatial sampling are also dependent on modulation frequency: the greatest elevations occur with higher modulation frequencies. The idea that this is due to masking of the modulation frequency by the spatial frequencies in the sampling lattice is considered.
Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko
2016-04-19
A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and an experimental conversion factor (Cf) value. We developed a simple screening method that avoids the need for a calibration curve and Cf value. In this method, ΔCq values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔCq value between the analytical and control samples is used as the criterion for determining analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable with at least the two models of PCR instruments used in this study.
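The ΔΔCq criterion can be sketched with a few lines of arithmetic. The Cq values, the cut-off, and the decision direction (a larger ΔΔCq indicating less GM material than the threshold-level control) are illustrative assumptions rather than the published decision rule.

```python
def delta_cq(cq_target, cq_endogenous):
    """ΔCq between the GM target sequence and the endogenous maize gene."""
    return cq_target - cq_endogenous

def below_labeling_threshold(sample_cqs, control_cqs, cutoff=0.0):
    """ΔΔCq = ΔCq(sample) - ΔCq(control). Under the illustrative assumption that a
    higher target Cq means less GM material, a ΔΔCq at or above the cut-off flags
    the sample as containing less GM content than the threshold-level control."""
    ddcq = delta_cq(*sample_cqs) - delta_cq(*control_cqs)
    return ddcq >= cutoff

# Hypothetical (target, endogenous) Cq pairs for a sample and a threshold-level control
print(below_labeling_threshold(sample_cqs=(33.5, 24.0), control_cqs=(30.0, 24.1)))
```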
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from different disorders such as Prader-Willi, DiGeorge or autism, where it showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
Wang, Fang; Zhang, Gai
2011-03-01
The basic principles and the application of hydride-generation multi-channel atomic fluorescence spectrometry (HG-MC-AFS) in soil analysis are described. It is generally understood that only one or two elements can be simultaneously detected by commonly used one- or two-channel HG-AFS. In this work, a new sample-sensitive and effective method for the analysis of arsenic, bismuth, tellurium, and selenium in soil samples by simultaneous detection using HG-MC-AFS was developed. The method detection limits for arsenic, bismuth, tellurium, and selenium are 0.19 μg/g, 0.10 μg/g, 0.11 μg/g, and 0.08 μg/g, respectively. This method was successfully applied to the simultaneous determination of arsenic, bismuth, tellurium, and selenium in soil samples.
System and method for assaying a radionuclide
Cadieux, James R; King, III, George S; Fugate, Glenn A
2014-12-23
A system for assaying a radionuclide includes a liquid scintillation detector, an analyzer connected to the liquid scintillation detector, and a delay circuit connected to the analyzer. The system also includes a gamma detector and a multi-channel analyzer connected to the delay circuit and the gamma detector. The multi-channel analyzer produces a signal reflective of the radionuclide in the sample. A method for assaying a radionuclide includes selecting a sample, detecting alpha or beta emissions from the sample with a liquid scintillation detector, producing a first signal reflective of the alpha or beta emissions, and delaying the first signal a predetermined time. The method further includes detecting gamma emissions from the sample, producing a second signal reflective of the gamma emissions, and combining the delayed first signal with the second signal to produce a third signal reflective of the radionuclide.
Development and test of photon counting lidar
NASA Astrophysics Data System (ADS)
Wang, Chun-hui; Wang, Ao-you; Tao, Yu-liang; Li, Xu; Peng, Huan; Meng, Pei-bei
2018-02-01
In order to satisfy the application requirements of spaceborne three-dimensional imaging lidar, a prototype of non-scanning multi-channel lidar based on receiver field-of-view segmentation was designed and developed. A high-repetition-frequency micro-pulse laser, an optical fiber array and Geiger-mode APDs, in combination with time-correlated single-photon counting technology, were adopted to achieve multi-channel detection. Ranging experiments were carried out outdoors. Under low echo-photon conditions, target photon counts were time-correlated while noise photon counts were random. Detection probability and range precision versus threshold were described; range precision improved from 0.44 to 0.11 when the threshold was increased from 4 to 8.
Sato, Masaya; Kajita, Shin; Yasuhara, Ryo; Ohno, Noriyasu; Tokitani, Masayuki; Yoshida, Naoaki; Tawara, Yuzuru
2013-04-22
Multi-pulse laser-induced damage threshold (LIDT) was experimentally investigated up to ~10⁶ pulses for Cu and Ag mirrors. The dependence of the LIDT on surface roughness and hardness was also examined. The LIDT of OFHC-Cu decreased with the pulse number and was 1.0 J/cm² at 1.8 × 10⁶ pulses. The expected LIDT of cutting Ag at 10⁷ pulses was the highest; an Ag mirror would be one of the best choices for the ITER Thomson scattering system. Regarding roughness and hardness, the material dependences of the LIDT are discussed with the experimental results.
Reducing Threshold of Multi Quantum Wells InGaN Laser Diode by Using InGaN/GaN Waveguide
NASA Astrophysics Data System (ADS)
Abdullah, Rafid A.; Ibrahim, Kamarulazizi
2010-07-01
ISE TCAD (Integrated System Engineering Technology Computer Aided Design) software simulation has been utilized to study the effect of using InGaN/GaN as a waveguide, instead of the conventional GaN waveguide, for a multi-quantum-well violet InGaN laser diode (LD). Simulation results indicate that the threshold of the LD is reduced by using the InGaN/GaN waveguide, since the InGaN/GaN waveguide increases the optical confinement factor, which leads to increased carrier confinement in the active region of the LD.
NASA Astrophysics Data System (ADS)
Vlaisavljevich, Eli
Histotripsy is a noninvasive ultrasound therapy that controls acoustic cavitation to mechanically fractionate soft tissue. This dissertation investigates the physical thresholds to initiate cavitation and produce tissue damage in histotripsy and factors affecting these thresholds in order to develop novel strategies for targeted tissue ablation. In the first part of this dissertation, the effects of tissue properties on histotripsy cavitation thresholds and damage thresholds were investigated. Results demonstrated that the histotripsy shock scattering threshold using multi-cycle pulses increases in stiffer tissues, while the histotripsy intrinsic threshold using single-cycle pulses is independent of tissue stiffness. Further, the intrinsic threshold slightly decreases with lower frequencies and significantly decreases with increasing temperature. The effects of tissue properties on the susceptibility to histotripsy-induced tissue damage were also investigated, demonstrating that stiffer tissues are more resistant to histotripsy. Two strategies were investigated for increasing the effectiveness of histotripsy for the treatment of stiffer tissues, with results showing that thermal preconditioning may be used to alter tissue susceptibility to histotripsy and that lower frequency treatments may increase the efficiency of histotripsy tissue ablation due to enhanced bubble expansion. In the second part of this dissertation, the feasibility of using histotripsy for targeted liver ablation was investigated in an intact in vivo porcine model, with results demonstrating that histotripsy was capable of non-invasively creating precise lesions throughout the entire liver. Additionally, a tissue selective ablation approach was developed, where histotripsy completely fractionated the liver tissue surrounding the major hepatic vessels and gallbladder while being self-limited at the boundaries of these critical structures. Finally, the long-term effects of histotripsy liver ablation were investigated in an intact in vivo rodent model, showing that the liver homogenate resulting from histotripsy-induced tissue fractionation was completely resorbed over the course of 28 days. In the final part of this dissertation, a novel ablation method combining histotripsy with acoustically sensitive nanodroplets was developed for targeted cancer cell ablation, demonstrating the potential of using nanodroplet-mediated histotripsy (NMH) for targeted, multi-focal ablation. Studies demonstrated that lower frequency and higher boiling point perfluorocarbon droplets can improve NMH therapy. The role of positive and negative pressure on cavitation nucleation in NMH was also investigated, showing that NMH cavitation nucleation is caused directly from the peak negative pressure of the incident wave, similar to histotripsy bubbles generated above the intrinsic threshold. Overall, the results of this dissertation provide significant insight into the physical mechanisms underlying histotripsy tissue ablation and will help to guide the future development of histotripsy for clinical applications such as the treatment of liver cancer.
Utility of Decision Rules for Transcutaneous Bilirubin Measurements
Burgos, Anthony E.; Flaherman, Valerie; Chung, Esther K.; Simpson, Elizabeth A.; Goyal, Neera K.; Von Kohorn, Isabelle; Dhepyasuwan, Niramol
2016-01-01
BACKGROUND: Transcutaneous bilirubin (TcB) meters are widely used for screening newborns for jaundice, with a total serum bilirubin (TSB) measurement indicated when the TcB value is classified as “positive” by using a decision rule. The goal of our study was to assess the clinical utility of 3 recommended TcB screening decision rules. METHODS: Paired TcB/TSB measurements were collected at 34 newborn nursery sites. At 27 sites (sample 1), newborns were routinely screened with a TcB measurement. For sample 2, sites that typically screen with TSB levels also obtained a TcB measurement for the study. Three decision rules to define a positive TcB measurement were evaluated: ≥75th percentile on the Bhutani nomogram, 70% of the phototherapy level, and within 3 mg/dL of the phototherapy threshold. The primary outcome was a TSB level at/above the phototherapy threshold. The rate of false-negative TcB screens and percentage of blood draws avoided were calculated for each decision rule. RESULTS: For sample 1, data were analyzed on 911 paired TcB-TSB measurements from a total of 8316 TcB measurements. False-negative rates were <10% with all decision rules; none identified all 31 newborns with a TSB level at/above the phototherapy threshold. The percentage of blood draws avoided ranged from 79.4% to 90.7%. In sample 2, each rule correctly identified all 8 newborns with TSB levels at/above the phototherapy threshold. CONCLUSIONS: Although all of the decision rules can be used effectively to screen newborns for jaundice, each will “miss” some infants with a TSB level at/above the phototherapy threshold. PMID:27244792
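The three decision rules compared above reduce to simple comparisons of the TcB value against a nomogram percentile or the phototherapy threshold. The sketch below shows how a screen would be flagged positive under each rule; the numeric values and the percentile lookup are placeholders, not the study's data.

```python
def positive_screens(tcb, p75_nomogram, phototherapy_threshold):
    """Return which of the three TcB decision rules flag this screen positive.
    All inputs are in mg/dL; p75_nomogram is the 75th-percentile value read
    from the Bhutani nomogram for the newborn's age in hours."""
    return {
        ">=75th percentile (Bhutani nomogram)": tcb >= p75_nomogram,
        ">=70% of phototherapy level": tcb >= 0.70 * phototherapy_threshold,
        "within 3 mg/dL of phototherapy level": tcb >= phototherapy_threshold - 3.0,
    }

# Hypothetical newborn: TcB 11.2 mg/dL, 75th percentile 10.5, phototherapy level 15
print(positive_screens(11.2, p75_nomogram=10.5, phototherapy_threshold=15.0))
```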
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given a fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Single shot multi-wavelength phase retrieval with coherent modulation imaging.
Dong, Xue; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-04-15
A single shot multi-wavelength phase retrieval method is proposed by combining common coherent modulation imaging (CMI) and a low rank mixed-state algorithm together. A radiation beam consisting of multi-wavelength is illuminated on the sample to be observed, and the exiting field is incident on a random phase plate to form speckle patterns, which is the incoherent superposition of diffraction patterns of each wavelength. The exiting complex amplitude of the sample including both the modulus and phase of each wavelength can be reconstructed simultaneously from the recorded diffraction intensity using a low rank mixed-state algorithm. The feasibility of this proposed method was verified with visible light experimentally. This proposed method not only makes CMI realizable with partially coherent illumination but also can extend its application to various traditionally unrelated fields, where several wavelengths should be considered simultaneously.
Ishikawa, Hiroshi; Kasahara, Kohei; Sato, Sumie; Shimakawa, Yasuhisa; Watanabe, Koichi
2014-05-16
Yeast contamination is a serious problem in the food industry and a major cause of food spoilage. Several yeasts, such as Filobasidiella neoformans, which cause cryptococcosis in humans, are also opportunistic pathogens, so a simple and rapid method for monitoring yeast contamination in food is essential. Here, we developed a simple and rapid method that utilizes loop-mediated isothermal amplification (LAMP) for the detection of F. neoformans. A set of five specific LAMP primers was designed that targeted the 5.8S-26S rDNA internal transcribed spacer 2 region of F. neoformans, and the primer set's specificity was confirmed. In a pure culture of F. neoformans, the LAMP assay had a lower sensitivity threshold of 10² cells/mL at a runtime of 60 min. In a probiotic dairy product artificially contaminated with F. neoformans, the LAMP assay also had a lower sensitivity threshold of 10² cells/mL, which was comparable to the sensitivity of a quantitative PCR (qPCR) assay. We also developed a simple two-step method for the extraction of DNA from a probiotic dairy product that can be performed within 15 min. This method involves initial protease treatment of the test sample at 45°C for 3 min followed by boiling at 100°C for 5 min under alkaline conditions. In a probiotic dairy product artificially contaminated with F. neoformans, analysis by means of our novel DNA extraction method followed by LAMP with our specific primer set had a lower sensitivity threshold of 10³ cells/mL at a runtime of 60 min. In contrast, use of our novel method of DNA extraction followed by qPCR assay had a lower sensitivity threshold of only 10⁵ cells/mL at a runtime of 3 to 4 h. Therefore, unlike the PCR assay, our LAMP assay can be used to quickly evaluate yeast contamination and is sensitive even for crude samples containing bacteria or background impurities. Our study provides a powerful tool for the primary screening of large numbers of food samples for yeast contamination. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghrefat, Habes A.; Goodell, Philip C.
2011-08-01
The goal of this research is to map land cover patterns and to detect changes that occurred at Alkali Flat and Lake Lucero, White Sands, using multispectral Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Advanced Land Imager (ALI), and hyperspectral Hyperion and Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data. The other objectives of this study were: (1) to evaluate the information dimensionality limits of Landsat 7 ETM+, ASTER, ALI, Hyperion, and AVIRIS data with respect to signal-to-noise and spectral resolution, (2) to determine the spatial distribution and fractional abundances of land cover endmembers, and (3) to check ground correspondence with satellite data. A better understanding of the spatial and spectral resolution of these sensors, the optimum spectral bands and their information contents, appropriate image processing methods, the spectral signatures of land cover classes, and atmospheric effects is needed to improve our ability to detect and map minerals from space. Image spectra were validated using samples collected from various localities across Alkali Flat and Lake Lucero. These samples were measured in the laboratory using VNIR-SWIR (0.4-2.5 μm) spectra and the X-ray Diffraction (XRD) method. Dry gypsum deposits, wet gypsum deposits, standing water, green vegetation, and clastic alluvial sediments dominated by mixtures of ferric iron (ferricrete) and calcite were identified in the study area using Minimum Noise Fraction (MNF), Pixel Purity Index (PPI), and n-D Visualization. The results of MNF confirm that AVIRIS and Hyperion data have higher information dimensionality thresholds, exceeding the number of available bands of Landsat 7 ETM+, ASTER, and ALI data. ASTER and ALI data can be a reasonable alternative to AVIRIS and Hyperion data for the purpose of monitoring land cover, hydrology and sedimentation in the basin. The spectral unmixing analysis and dimensionality eigen analysis between the various datasets helped to uncover the optimal spatial-spectral-temporal and radiometric-resolution sensor characteristics for remote sensing based monitoring of seasonal land cover, surface water, groundwater, and alluvial sediment input changes within the basin. The results demonstrated good agreement between ground truth data, XRD analysis of samples, and the results of the Matched Filtering (MF) mapping method.
Multiscale investigation of chemical interference in proteins
NASA Astrophysics Data System (ADS)
Samiotakis, Antonios; Homouz, Dirar; Cheung, Margaret S.
2010-05-01
We developed a multiscale approach (MultiSCAAL) that integrates the potential of mean force obtained from all-atomistic molecular dynamics simulations with a knowledge-based energy function for coarse-grained molecular simulations in better exploring the energy landscape of a small protein under chemical interference such as chemical denaturation. An excessive amount of water molecules in all-atomistic molecular dynamics simulations often negatively impacts the sampling efficiency of some advanced sampling techniques such as the replica exchange method and it makes the investigation of chemical interferences on protein dynamics difficult. Thus, there is a need to develop an effective strategy that focuses on sampling structural changes in protein conformations rather than solvent molecule fluctuations. In this work, we address this issue by devising a multiscale simulation scheme (MultiSCAAL) that bridges the gap between all-atomistic molecular dynamics simulation and coarse-grained molecular simulation. The two key features of this scheme are the Boltzmann inversion and a protein atomistic reconstruction method we previously developed (SCAAL). Using MultiSCAAL, we were able to enhance the sampling efficiency of proteins solvated by explicit water molecules. Our method has been tested on the folding energy landscape of a small protein Trp-cage with explicit solvent under 8M urea using both the all-atomistic replica exchange molecular dynamics and MultiSCAAL. We compared computational analyses on ensemble conformations of Trp-cage with its available experimental NOE distances. The analysis demonstrated that conformations explored by MultiSCAAL better agree with the ones probed in the experiments because it can effectively capture the changes in side-chain orientations that can flip out of the hydrophobic pocket in the presence of urea and water molecules. In this regard, MultiSCAAL is a promising and effective sampling scheme for investigating chemical interference which presents a great challenge when modeling protein interactions in vivo.
NASA Astrophysics Data System (ADS)
Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari
2017-12-01
In this paper, we propose a novel monitoring strategy for a wireless sensor networks (WSNs)-based water pipeline network. Our strategy uses a multi-pronged approach to reduce energy consumption based on the use of two types of vibration sensors and pressure sensors, all having different energy levels, and a hierarchical adaptive sampling mechanism to determine the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored by using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, the duty-cycling approach is used for all sensors to reduce the sampling instances, such that the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles, and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low duty-cycling (HE-HP) vibration sensor adjusts the sampling frequency of the high duty-cycling (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network which uses pressure and vibration sensors, with the latter having different energy consumptions and precision levels, at various locations in the network. This is all the more useful for energy conservation for extended monitoring. It is shown that by using the novel features of our proposed scheme, a significant reduction in energy consumption is achieved and the leak is effectively detected by the sensor node that is closest to it. Finally, both the total energy consumed by monitoring as well as the time to detect the leak by a WSN node are computed, and show the superiority of our proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2010-01-01
Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet are reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics based coherence threshold. Examples are presented showing that the underlying tonal and coherent broad band structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise. Copyright 2011 Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.
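A commonly used statistics-based significance level for magnitude-squared coherence estimated from n averaged segments is 1 - α^(1/(n-1)); the choice of n (disjoint records versus the larger count of overlapped records) shifts this threshold, which is the selection issue discussed above. The sketch below computes both thresholds for a simulated two-microphone signal with a shared tone; the signal, segment length, and frequencies are invented for illustration, not the engine data or the paper's Fisher z-transform procedure.

```python
import numpy as np
from scipy import signal

def coherence_threshold(n_segments, alpha=0.05):
    """Significance level for magnitude-squared coherence averaged over
    n_segments segments: values below this are consistent with zero coherence."""
    return 1.0 - alpha ** (1.0 / (n_segments - 1))

# Hypothetical far-field microphone pair sharing a low-frequency tone plus noise
fs, n = 4096, 4096 * 8
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 120 * np.arange(n) / fs)
x = common + rng.normal(size=n)
y = 0.8 * common + rng.normal(size=n)

nperseg = 512
f, cxy = signal.coherence(x, y, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)

n_independent = n // nperseg              # number of disjoint records
n_overlapped = 2 * n_independent - 1      # number of 50%-overlapped records
print("threshold (independent records):", round(coherence_threshold(n_independent), 3))
print("threshold (overlapped records):", round(coherence_threshold(n_overlapped), 3))
print("coherence near 120 Hz:", round(cxy[np.argmin(np.abs(f - 120))], 3))
```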
Fram, Miranda S.; Munday, Cathy; Belitz, Kenneth
2009-01-01
Groundwater quality in the approximately 460-square-mile Tahoe-Martis study unit was investigated in June through September 2007 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the Tahoe-Martis study unit (Tahoe-Martis) and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 52 wells in El Dorado, Placer, and Nevada Counties. Forty-one of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 11 were selected to aid in evaluation of specific water-quality issues (understanding wells). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, strontium isotope ratio, and stable isotopes of hydrogen and oxygen of water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 240 constituents and water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) each were collected at 12 percent of the wells, and the results obtained from these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that data for the groundwater samples were not compromised by possible contamination during sample collection, handling or analysis. Differences between replicate samples were within acceptable ranges. Matrix spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH), and with aesthetic and technical thresholds established by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate of compliance or noncompliance with regulatory thresholds. The concentrations of most constituents detected in groundwater samples from the Tahoe-Martis wells were below drinking-water thresholds. 
Organic compounds (VOCs and pesticides) were detected in about 40 percent of the samples from grid wells, and most concentrations were less than 1/100th of regulatory and nonregulatory health-based thresholds, although the concentration of perchloroethene in one sample was above the USEPA maximum contaminant level (MCL-US). Concentrations of all trace elements and nutrients in samples from grid wells were below regulatory and nonregulatory health-based thresholds, with five exceptions. Concentra
Kakitani, Ayano; Inoue, Tomonori; Matsumoto, Keiko; Watanabe, Jun; Nagatomi, Yasushi; Mochizuki, Naoki
2014-01-01
An LC-MS/MS method was developed for the simultaneous determination of 15 water-soluble vitamins that are widely used as additives in beverages and dietary supplements. This combined method involves the following simple pre-treatment procedures: dietary supplement samples were prepared by centrifugation and filtration after an extraction step, whereas beverage samples were diluted prior to injection. Chromatographic analysis in this method utilised a multi-mode ODS column, which provided reverse-phase, anion- and cation-exchange capacities, and therefore improved the retention of highly polar analytes such as water-soluble vitamins. Additionally, the multi-mode ODS column did not require adding ion pair reagents to the mobile phase. We optimised the chromatographic separation of 15 water-soluble vitamins by adjusting the mobile phase pH and the organic solvent. We also conducted an analysis of a NIST Standard Reference Material (SRM 3280 Multi-vitamin/Multi-element tablets) using this method to verify its accuracy. In addition, the method was applied to identify the vitamins in commercial beverages and dietary supplements. By comparing results with the label values and results obtained by official methods, it was concluded that the method could be used for quality control and to compose nutrition labels for vitamin-enriched products.
Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran
2009-10-01
Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.
Urine benzodiazepines screening of involuntarily drugged and robbed or raped patients.
Boussairi, A; Dupeyron, J P; Hernandez, B; Delaitre, D; Beugnet, L; Espinoza, P; Diamant-Berger, O
1996-01-01
This study involved 35 patients who claimed to have been drugged before being robbed or raped, despite negative urine toxicologic screening by immunoenzymatic methods. The urines were frozen for further investigations, including enzymatic hydrolysis of urinary conjugates, liquid-solid extraction and, finally, immunoenzymatic screening of the concentrated urine extract. Urine benzodiazepines were analyzed by immunoenzymatic assay before and after enzymatic hydrolysis combined with extraction. On direct immunoenzymatic screening, 17 of the 35 urine samples were benzodiazepine positive. Enrichment of preserved specimens improved the detection threshold from 200 ng/mL to 50 ng/mL, and 10 of the 18 negative urines became positive. This method allowed us to detect benzodiazepines in half of the previously negative urine samples. Benzodiazepine screening is particularly problematic because of low dosages, rapid elimination, failure of immunoenzymatic reagents to detect conjugated metabolites, and the high threshold of sensitivity for certain substances.
Heterogeneous Multi-Metric Learning for Multi-Sensor Fusion
2011-07-01
distance”. One of the most widely used methods is the k-nearest neighbor (KNN) method [4], which labels an input data sample with the class of the majority... Despite its simplicity, it can be an effective candidate and can be easily extended to handle multiple sensors. Distance-based methods such as KNN rely... the Large Margin Nearest Neighbor (LMNN) method [21], which will be briefly reviewed in the sequel. The LMNN method tries to learn an optimal metric specifically for the KNN classifier. The
A population study of urine glycerol concentrations in elite athletes competing in North America.
Kelly, Brian N; Madsen, Myke; Sharpe, Ken; Nair, Vinod; Eichner, Daniel
2013-01-01
Glycerol is an endogenous substance that is on the World Anti-Doping Agency's list of prohibited threshold substances due to its potential use as a plasma volume expansion agent. The WADA has set the threshold for urine glycerol, including measurement uncertainty, at 1.3 mg/mL. Glycerol in circulation largely comes from metabolism of triglycerides in order to meet energy requirements, and when the renal threshold is eclipsed, glycerol is excreted into urine. In part due to ethnic differences in postprandial triglyceride concentrations, we investigated urine glycerol concentrations in a population of elite athletes competing in North America and compared the results to those of athletes competing in Europe. 959 urine samples from elite athletes competing in North America collected for anti-doping purposes were analyzed for urine glycerol concentrations by a gas chromatography-mass spectrometry method. Samples were divided into groups according to: Timing (in- or out-of-competition), Class (strength, game, or endurance sports) and Gender. 333 (34.7%) samples had undetectable amounts of glycerol (<1 μg/mL). 861 (89.8%) of the samples had glycerol concentrations ≤20 μg/mL. The highest glycerol concentration observed was 652 μg/mL. Analysis of the data finds the effects of each category to be statistically significant. The largest estimate of the 99.9th percentile, from the in-competition, female, strength athlete samples, was 1813 μg/mL with a 95% confidence range from 774 to 4251 μg/mL. This suggests a conservative threshold of 4.3 mg/mL, which would result in a reasonable detection window for urine samples collected in-competition for all genders and sport classes. Copyright © 2013 John Wiley & Sons, Ltd.
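Population-derived thresholds of this kind rest on estimating an upper percentile of the concentration distribution together with its uncertainty. The sketch below estimates a 99.9th percentile with a bootstrap confidence interval; the simulated lognormal data are a stand-in, not the athletes' measurements.

```python
import numpy as np

def percentile_with_ci(values, q=99.9, n_boot=2000, seed=0):
    """Point estimate and bootstrap 95% CI for the q-th percentile."""
    values = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    boot = [np.percentile(rng.choice(values, size=values.size, replace=True), q)
            for _ in range(n_boot)]
    return np.percentile(values, q), np.percentile(boot, [2.5, 97.5])

# Simulated right-skewed urine glycerol concentrations (ug/mL), stand-in data only
rng = np.random.default_rng(1)
glycerol = rng.lognormal(mean=1.5, sigma=1.2, size=959)
point, (lo, hi) = percentile_with_ci(glycerol)
print(f"99.9th percentile: {point:.0f} ug/mL (95% CI {lo:.0f}-{hi:.0f})")
```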
Ludwin, Brian M; Bamonti, Patricia; Mulligan, Elizabeth A
2017-11-21
To describe a program evaluation of the interrelationship of adherence and treatment outcomes in a sample of veteran older adults with co-morbidities who participated in group-based cognitive behavioral therapy for insomnia. Retrospective data extraction was performed for 14 older adults. Adherence and sleep outcomes were measured with sleep diaries and the Insomnia Severity Index. Demographic and clinical information was extracted through chart review. Adherence with prescribed time in bed, daily sleep diaries, and maintaining consistent time out of bed and time in bed was generally high. There were moderate, though not significant, improvements in consistency of time in bed and time out of bed over time. Adherence was not significantly associated with sleep outcomes despite improvements in most sleep outcomes. The non-significant relationship between sleep outcomes and adherence may reflect the moderating influence of co-morbidities or may suggest a threshold effect beyond which stricter adherence has a limited impact on outcomes. Development of multi-method adherence measures across all treatment components will be important to understand the influence of adherence on treatment outcomes, as monitoring adherence to time in bed and time out of bed had limited utility for understanding treatment outcomes in our sample.
NASA Astrophysics Data System (ADS)
Bitenc, M.; Kieffer, D. S.; Khoshelham, K.
2015-08-01
The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) wavelet transform (SWT or DWT), (ii) thresholding method (fixed-form or penalised low) and (iii) thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which for the second analysis are corrupted by different levels of noise. With such controlled-noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
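For readers unfamiliar with the two transforms, the following Python sketch (using PyWavelets) contrasts DWT- and SWT-based denoising of a 1-D signal with a fixed-form (universal) threshold; the test signal, wavelet choice, and threshold rule are assumptions, and the snippet is not the surface-denoising pipeline used in the study.

```python
# Sketch of 1-D wavelet denoising with DWT vs. SWT and soft/hard thresholding.
# Signal, wavelet, and the universal (fixed-form) threshold are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 8 * np.pi, 1024))
noisy = x + 0.3 * rng.normal(size=x.size)

sigma = np.median(np.abs(pywt.wavedec(noisy, 'db4')[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(noisy.size))        # fixed-form (universal) threshold

def denoise_dwt(y, mode='hard'):
    coeffs = pywt.wavedec(y, 'db4', level=4)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, 'db4')

def denoise_swt(y, mode='hard'):
    coeffs = pywt.swt(y, 'db4', level=4)             # undecimated (stationary) transform
    coeffs = [(cA, pywt.threshold(cD, thr, mode=mode)) for cA, cD in coeffs]
    return pywt.iswt(coeffs, 'db4')

for name, rec in [('DWT', denoise_dwt(noisy)), ('SWT', denoise_swt(noisy))]:
    print(name, 'RMSE:', np.sqrt(np.mean((rec[:x.size] - x) ** 2)))
```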
Smalling, K.L.; Kuivila, K.M.
2008-01-01
A multi-residue method was developed for the simultaneous determination of 85 current-use and legacy organochlorine pesticides in a single sediment sample. After microwave-assisted extraction, clean-up of samples was optimized using gel permeation chromatography and either stacked carbon and alumina solid-phase extraction cartridges or a deactivated Florisil column. Analytes were determined by gas chromatography with ion-trap mass spectrometry and electron capture detection. Method detection limits ranged from 0.6 to 8.9 μg/kg dry weight. Bed and suspended sediments from a variety of locations were analyzed to validate the method and 29 pesticides, including at least 1 from every class, were detected.
An Energy-Efficient Multi-Tier Architecture for Fall Detection on Smartphones
Guvensan, M. Amac; Kansiz, A. Oguz; Camgoz, N. Cihan; Turkmen, H. Irem; Yavuz, A. Gokhan; Karsligil, M. Elif
2017-01-01
Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy efficiency without compromising fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method has 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions. PMID:28644378
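A minimal sketch of the tiered idea described above: a cheap acceleration-magnitude threshold screens signal windows first, and only suspicious windows are passed to a machine-learning classifier. The threshold value, features, and classifier below are illustrative assumptions, not the uSurvive implementation.

```python
# Tier 1: cheap accelerometer-magnitude threshold; Tier 2/3: ML classification
# is only run when Tier 1 flags a window. All numbers here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

IMPACT_THRESHOLD_G = 2.5        # assumed tier-1 acceleration threshold (in g)

def tier1_candidate(window_g):
    """Cheap check: does peak acceleration magnitude exceed the threshold?"""
    return np.max(np.abs(window_g)) > IMPACT_THRESHOLD_G

def classify_fall(window_g, model):
    """Run the ML model only when tier 1 flags the window."""
    if not tier1_candidate(window_g):
        return False                                  # skip costly inference
    feats = np.array([[window_g.max(), window_g.min(),
                       window_g.std(), np.abs(np.diff(window_g)).max()]])
    return bool(model.predict(feats)[0])

# Toy training data standing in for labelled accelerometer windows.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4)); y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
print(classify_fall(rng.normal(scale=3.0, size=128), model))
```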
Rejection thresholds in solid chocolate-flavored compound coating.
Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E
2012-10-01
Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds in solid chocolate-flavored compound coating for bitterness. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers compared to melters) on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate, a bitter and generally recognized as safe additive. Paired preference tests (blank compared to spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between 2 self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than the milk chocolate preferring group (P= 0.01). Conversely, eating style did not affect group rejection thresholds (P= 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (P= 0.36). Present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. This work makes use of the rejection threshold method to study market segmentation, extending its use to solid foods. We believe this method has broad applicability to the sensory specialist and product developer by providing a process to identify how much is too much when formulating products, even in the context of specific market segments. We illustrate this in solid chocolate-flavored compound coating, identifying substantial differences in the amount of acceptable bitterness in those who prefer milk chocolate compared to dark chocolate. This method provides a direct means to answer the question of how much is too much. © 2012 Institute of Food Technologists®
Automatic 3D power line reconstruction of multi-angular imaging power line inspection system
NASA Astrophysics Data System (ADS)
Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei
2007-06-01
We develop a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between the high-voltage power line and surrounding objects, and to raise an alert if the warning threshold is exceeded. The system generates a DSM of the power line passage, which comprises the ground surface and ground objects such as trees and houses. To reveal dangerous regions, where ground objects are too close to the power line, 3D power line information should be extracted at the same time. In order to improve the automation level of extraction and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. Reconstruction is achieved by using the epipolar constraint and prior knowledge of the pole tower height. The proper 3D power line information is then obtained by space intersection using the found homologous projections. The flight experiment results show that the proposed method can successfully reconstruct the 3D power line, and that the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
Kos, Gregor; Sieger, Markus; McMullin, David; Zahradnik, Celine; Sulyok, Michael; Öner, Tuba; Mizaikoff, Boris; Krska, Rudolf
2016-10-01
The rapid identification of mycotoxins such as deoxynivalenol and aflatoxin B1 in agricultural commodities is an ongoing concern for food importers and processors. While sophisticated chromatography-based methods are well established for regulatory testing by food safety authorities, few techniques exist to provide a rapid assessment for traders. This study advances the development of a mid-infrared spectroscopic method, recording spectra with little sample preparation. Spectral data were classified using a bootstrap-aggregated (bagged) decision tree method, evaluating the protein and carbohydrate absorption regions of the spectrum. The method was able to classify 79% of 110 maize samples at the European Union regulatory limit for deoxynivalenol of 1750 µg kg⁻¹ and, for the first time, 77% of 92 peanut samples at 8 µg kg⁻¹ of aflatoxin B1. A subset model revealed a dependency on variety and type of fungal infection. The employed CRC and SBL maize varieties could be pooled in the model with a reduction of classification accuracy from 90% to 79%. Samples infected with Fusarium verticillioides were removed, leaving samples infected with F. graminearum and F. culmorum in the dataset and improving classification accuracy from 73% to 79%. A 500 µg kg⁻¹ classification threshold for deoxynivalenol in maize performed even better, with 85% accuracy. This is assumed to be due to a larger number of samples around the threshold increasing representativeness. Comparison with established principal component analysis classification, which consistently showed overlapping clusters, confirmed the superior performance of bagged decision tree classification.
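The classification step can be sketched with bootstrap-aggregated decision trees, as below; the synthetic "spectra", the label rule at the 1750 µg kg⁻¹ limit, and the scikit-learn setup are placeholders rather than the study's actual data or model.

```python
# Sketch of bagged decision-tree classification of spectra against a regulatory
# threshold; the synthetic spectra and contamination levels are placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
spectra = rng.normal(size=(110, 300))            # 110 samples x 300 wavenumbers
don_ug_kg = rng.gamma(shape=2.0, scale=900.0, size=110)
labels = (don_ug_kg > 1750).astype(int)          # above/below the EU limit

bagged_trees = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=5),   # scikit-learn >= 1.2 API
    n_estimators=100, random_state=0)
acc = cross_val_score(bagged_trees, spectra, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```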
Use of three-point taper systems in timber cruising
James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes
2000-01-01
Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...
Demirci, F. Yesim; Wang, Xingbin; Kelly, Jennifer A.; Morris, David L.; Barmada, M. Michael; Feingold, Eleanor; Kao, Amy H.; Sivils, Kathy L.; Bernatsky, Sasha; Pineau, Christian; Clarke, Ann; Ramsey-Goldman, Rosalind; Vyse, Timothy J.; Gaffney, Patrick M.; Manzi, Susan; Kamboh, M. Ilyas
2016-01-01
Objective Genome-wide association studies (GWASs) in individuals of European ancestry identified a number of systemic lupus erythematosus (SLE) susceptibility loci using earlier versions of high-density genotyping platforms. Follow-up studies on suggestive GWAS regions using larger samples and more markers identified additional SLE loci in European-descent subjects. Here we report the results of a multi-stage study that we performed to identify novel SLE loci. Methods In Stage 1, we conducted a new GWAS of SLE in a North American case-control sample of European ancestry (n=1,166) genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. In Stage 2, we further investigated top new suggestive GWAS hits by in silico evaluation and meta-analysis using an additional dataset of European-descent subjects (>2,500 individuals), followed by replication of top meta-analysis findings in another dataset of European-descent subjects (>10,000 individuals) in Stage 3. Results As expected, our GWAS revealed the most significant associations at the major histocompatibility complex locus (6p21), which easily surpassed the genome-wide significance threshold (P < 5×10⁻⁸). Several other SLE signals/loci previously implicated in Caucasians and/or Asians were also supported in the Stage 1 discovery sample, and the strongest signals were observed at 2q32/STAT4 (P = 3.6×10⁻⁷) and at 8p23/BLK (P = 8.1×10⁻⁶). Stage 2 meta-analyses identified a new genome-wide significant SLE locus at 12q12 (meta P = 3.1×10⁻⁸), which was replicated in Stage 3. Conclusion Our multi-stage study identified and replicated a new SLE locus that warrants further follow-up in additional studies. Publicly available databases suggest that this new SLE signal falls within a functionally relevant genomic region and near biologically important genes. PMID:26316170
Fram, Miranda S.; Belitz, Kenneth
2007-01-01
Ground-water quality in the approximately 1,800 square-mile Southern Sierra study unit (SOSA) was investigated in June 2006 as part of the Statewide Basin Assessment Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Statewide Basin Assessment Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Southern Sierra study was designed to provide a spatially unbiased assessment of raw ground-water quality within SOSA, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from fifty wells in Kern and Tulare Counties. Thirty-five of the wells were selected using a randomized grid-based method to provide statistical representation of the study area, and fifteen were selected to evaluate changes in water chemistry along ground-water flow paths. The ground-water samples were analyzed for a large number of synthetic organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates, pharmaceutical compounds, and wastewater-indicator compounds], constituents of special interest [perchlorate, N-nitrosodimethylamine (NDMA), and 1,2,3-trichloropropane (1,2,3-TCP)], naturally occurring inorganic constituents [nutrients, major and minor ions, and trace elements], radioactive constituents, and microbial indicators. Naturally occurring isotopes [tritium, and carbon-14, and stable isotopes of hydrogen and oxygen in water], and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected for approximately one-eighth of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control information resulted in censoring of less than 0.2 percent of the data collected for ground-water samples. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs and pesticides were detected in less than one-third of the grid wells, and all detections in samples from SOSA wells were below health-based thresholds. All detections of trace elements and nutrients in samples from SOSA wells were below health-based thresholds, with the exception of four detections of arsenic that were above the USEPA maximum contaminant level (MCL-US) and one detection of boron that was above the CDPH notification level (NL-CA). All detections of radioactive constituents were below health-based thresholds, although four samples had activities of radon-222 above the proposed MCL-US. 
Most of the samples from SOSA wells had concentrations of major elements, total dissolved solids, and trace elements below the non-enforceable thresholds set for aesthetic concerns. A few samples contained iron, manganese, or total dissolved solids at concentrations above the SMCL-CA thresholds.
NASA Astrophysics Data System (ADS)
Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.
2016-11-01
In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of multi-source data fusion in urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach is proposed in this study. The processing consists of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image is segmented into objects, and NDVI, Coastal Blue, and nDSM thresholds are set for extracting building objects. (2) In the second hierarchy, after removing building objects, WorldView-2 fused imagery is obtained by Haze-ratio-based (HR) fusion and segmented, and an SVM classifier is applied to generate road/parking lot, vegetation, and bare soil objects. (3) Trees and grasslands are then split based on an nDSM threshold (2.4 m). The results showed that, compared with the pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better urban land cover classification performance, with the overall accuracy (OA) and overall kappa (OK) improving to 92.75% and 0.90, respectively. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings through LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands by setting the nDSM threshold.
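The rule-based portion of the hierarchy can be illustrated with per-pixel NDVI and nDSM thresholds, as in the sketch below; apart from the stated 2.4 m tree/grass split, the threshold values and index inputs are assumptions.

```python
# Sketch of the threshold rules: NDVI and nDSM separate buildings, trees, and
# grass per pixel. Only the 2.4 m tree/grass split is taken from the study;
# the other values and the random rasters are assumptions.
import numpy as np

def classify_pixels(ndvi, ndsm, ndvi_veg=0.3, building_height=3.0):
    """Return a label raster: 0 other, 1 building, 2 tree, 3 grass."""
    labels = np.zeros(ndvi.shape, dtype=np.uint8)
    building = (ndvi < ndvi_veg) & (ndsm > building_height)
    tree = (ndvi >= ndvi_veg) & (ndsm > 2.4)        # 2.4 m split from the study
    grass = (ndvi >= ndvi_veg) & (ndsm <= 2.4)
    labels[building], labels[tree], labels[grass] = 1, 2, 3
    return labels

ndvi = np.random.default_rng(5).uniform(-0.2, 0.9, size=(100, 100))
ndsm = np.random.default_rng(6).uniform(0.0, 20.0, size=(100, 100))
print(np.bincount(classify_pixels(ndvi, ndsm).ravel(), minlength=4))
```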
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yin; Wang, Wen; Wysocki, Gerard, E-mail: gwysocki@princeton.edu
In this Letter, we present a method of performing broadband mid-infrared spectroscopy with conventional, free-running, continuous wave Fabry-Perot quantum cascade lasers (FP-QCLs). The measurement method is based on multi-heterodyne down-conversion of optical signals. The sample transmission spectrum probed by one multi-mode FP-QCL is down-converted to the radio-frequency domain through an optical multi-heterodyne process using a second FP-QCL as the local oscillator. Both a broadband multi-mode spectral measurement as well as high-resolution (∼15 MHz) spectroscopy of molecular absorption are demonstrated and show great potential for development of high performance FP-laser-based spectrometers for chemical sensing.
Land, Michael; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 460 square mile San Fernando-San Gabriel study unit (SFSG) was investigated between May and July 2005 as part of the Priority Basin Assessment Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Assessment Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The San Fernando-San Gabriel study was designed to provide a spatially unbiased assessment of raw ground-water quality within SFSG, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 52 wells in Los Angeles County. Thirty-five of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and seventeen wells were selected to aid in the evaluation of specific water-quality issues or changes in water chemistry along a historic ground-water flow path (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates], constituents of special interest [perchlorate, N-nitrosodimethylamine (NDMA), 1,2,3-trichloropropane (1,2,3-TCP), and 1,4-dioxane], naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, and carbon-14, and stable isotopes of hydrogen, oxygen, and carbon), and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blanks, replicates, samples for matrix spikes) were collected at approximately one-fifth (11 of 52) of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control results showed that the data had very little bias or variability and resulted in censoring of less than 0.7 percent (32 of 4,484 measurements) of the data collected for ground-water samples. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs were detected in more than 90 percent (33 of 35) of grid wells. For all wells sampled for SFSG, nearly all VOC detections were below health-based thresholds, and most were less than one-tenth of the threshold values. Samples from seven wells had at least one detection of PCE, TCE, tetrachloromethane, NDMA, or 1,2,3-TCP at or above a health-based threshold. Pesticides were detected in about 90 percent (31 of 35) grid wells and all detections in samples from SFSG wells were below health-based thresholds. 
Major ions, trace elements, and nutrients in samples from 17 SFSG wells were all below health-based thresholds, with the exception of one detection of nitrate that was above the USEPA maximum contaminant level (MCL-US). With the exception of 14 samples having radon-222 above the proposed MCL-US, radioactive constituents were below health-based thresholds for 16 of the SFSG wells sampled. Total dissolved solids in 6 of the 24 SFSG wells that were sampled ha
NASA Astrophysics Data System (ADS)
Liu, Zhihui; Wang, Haitao; Dong, Tao; Yin, Jie; Zhang, Tingting; Guo, Hui; Li, Dequan
2018-02-01
In this paper, a cognitive multi-beam satellite system, in which two satellite networks coexist through underlay spectrum sharing, is studied, and a power and spectrum allocation method is employed for interference control and throughput maximization. Specifically, the multi-beam satellite with a flexible payload reuses the authorized spectrum of the primary satellite, adjusting its transmission band as well as its power for each beam to limit its interference on the primary satellite below the prescribed threshold and to maximize its own achievable rate. This power and spectrum allocation problem is formulated as a mixed nonconvex program. To solve it effectively, we first introduce the concept of the signal-to-leakage-plus-noise ratio (SLNR) to decouple the multiple transmit power variables in both the objective and the constraint, and then propose a heuristic algorithm to assign spectrum sub-bands. After that, a stepwise plus slice-wise algorithm is proposed to implement the discrete power allocation. Finally, simulation results show that adopting cognitive technology can improve the spectrum efficiency of satellite communication.
Angst, Ueli M.; Boschmann, Carolina; Wagner, Matthias; Elsener, Bernhard
2017-01-01
The aging of reinforced concrete infrastructure in developed countries imposes an urgent need for methods to reliably assess the condition of these structures. Corrosion of the embedded reinforcing steel is the most frequent cause for degradation. While it is well known that the ability of a structure to withstand corrosion depends strongly on factors such as the materials used or the age, it is common practice to rely on threshold values stipulated in standards or textbooks. These threshold values for corrosion initiation (Ccrit) are independent of the actual properties of a certain structure, which clearly limits the accuracy of condition assessments and service life predictions. The practice of using tabulated values can be traced to the lack of reliable methods to determine Ccrit on-site and in the laboratory. Here, an experimental protocol to determine Ccrit for individual engineering structures or structural members is presented. A number of reinforced concrete samples are taken from structures and laboratory corrosion testing is performed. The main advantage of this method is that it ensures real conditions concerning parameters that are well known to greatly influence Ccrit, such as the steel-concrete interface, which cannot be representatively mimicked in laboratory-produced samples. At the same time, the accelerated corrosion test in the laboratory permits the reliable determination of Ccrit prior to corrosion initiation on the tested structure; this is a major advantage over all common condition assessment methods that only permit estimating the conditions for corrosion after initiation, i.e., when the structure is already damaged. The protocol yields the statistical distribution of Ccrit for the tested structure. This serves as a basis for probabilistic prediction models for the remaining time to corrosion, which is needed for maintenance planning. This method can potentially be used in material testing of civil infrastructures, similar to established methods used for mechanical testing. PMID:28892023
Angst, Ueli M; Boschmann, Carolina; Wagner, Matthias; Elsener, Bernhard
2017-08-31
The aging of reinforced concrete infrastructure in developed countries imposes an urgent need for methods to reliably assess the condition of these structures. Corrosion of the embedded reinforcing steel is the most frequent cause for degradation. While it is well known that the ability of a structure to withstand corrosion depends strongly on factors such as the materials used or the age, it is common practice to rely on threshold values stipulated in standards or textbooks. These threshold values for corrosion initiation (Ccrit) are independent of the actual properties of a certain structure, which clearly limits the accuracy of condition assessments and service life predictions. The practice of using tabulated values can be traced to the lack of reliable methods to determine Ccrit on-site and in the laboratory. Here, an experimental protocol to determine Ccrit for individual engineering structures or structural members is presented. A number of reinforced concrete samples are taken from structures and laboratory corrosion testing is performed. The main advantage of this method is that it ensures real conditions concerning parameters that are well known to greatly influence Ccrit, such as the steel-concrete interface, which cannot be representatively mimicked in laboratory-produced samples. At the same time, the accelerated corrosion test in the laboratory permits the reliable determination of Ccrit prior to corrosion initiation on the tested structure; this is a major advantage over all common condition assessment methods that only permit estimating the conditions for corrosion after initiation, i.e., when the structure is already damaged. The protocol yields the statistical distribution of Ccrit for the tested structure. This serves as a basis for probabilistic prediction models for the remaining time to corrosion, which is needed for maintenance planning. This method can potentially be used in material testing of civil infrastructures, similar to established methods used for mechanical testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Liefeng, E-mail: fengliefeng@tju.edu.cn, E-mail: lihongru@nankai.edu.cn; Yang, Xiufang; Wang, Cunda
2015-04-15
The junction behavior of different narrow band-gap multi-quantum-well (MQW) laser diodes (LDs) confirmed that the jump in the junction voltage in the threshold region is a general characteristic of narrow band-gap LDs. The relative change in the 1310 nm LD is the most obvious. To analyze this sudden voltage change, the threshold region is divided into three stages by I_th^l and I_th^u, as shown in Fig. 2; I_th^l is the conventional threshold, and as long as the current is higher than this threshold, lasing exists and the IdV/dI-I plot drops suddenly; I_th^u is the steady lasing point, at which the separation of the quasi-Fermi levels of electrons and holes across the active region (V_j) is suddenly pinned. Based on the evolutionary model of dissipative structure theory, the rate equations of the photons in a single-mode LD were deduced in detail at I_th^l and I_th^u. The results proved that the observed behavior of stimulated emission suddenly substituting for spontaneous emission, in a manner similar to biological evolution, must lead to a sudden increase in the injected carriers in the threshold region, which then causes the sudden increase in the junction voltage in this region.
Multi-Criteria Decision Making Approaches for Quality Control of Genome-Wide Association Studies
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-01-01
Experimental errors in the genotyping phase of a Genome-Wide Association Study (GWAS) can lead to false positive findings and spurious associations. An appropriate quality control phase can minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for simultaneously taking these criteria and the experimenter's preferences into account. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without a history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they rationalize and make explicit the experimenter's choices, thus providing more reproducible results. PMID:21347174
The risk of water scarcity at different levels of global warming
NASA Astrophysics Data System (ADS)
Schewe, Jacob; Sharpe, Simon
2015-04-01
Water scarcity is a threat to human well-being and economic development in many countries today. Future climate change is expected to exacerbate the global water crisis by reducing renewable freshwater resources in different world regions, many of which are already dry. Studies of future water scarcity often focus on most-likely, or highest-confidence, scenarios. However, multi-model projections of water resources reveal large uncertainty ranges, which are due to different types of processes (climate, hydrology, human) and are therefore not easy to reduce. Thus, central estimates or multi-model mean results may be insufficient to inform policy and management. Here we present an alternative, risk-based approach. We use an ensemble of multiple global climate and hydrological models to quantify the likelihood of crossing a given water scarcity threshold under different levels of global warming. This approach allows us to assess the risk associated with any particular, pre-defined threshold (or magnitude of change that must be avoided), regardless of whether it lies in the center or in the tails of the uncertainty distribution. We show applications of this method at the country and river-basin scales, illustrate the effects of societal processes on the resulting risk estimates, and discuss the further potential of this approach for research and stakeholder dialogue.
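The risk-based reading of the ensemble reduces to a simple calculation: the likelihood of crossing a scarcity threshold is the fraction of ensemble members that fall below it. A minimal sketch, with invented projection values and an assumed threshold:

```python
# The likelihood of crossing a water-scarcity threshold is estimated as the
# fraction of ensemble members below it; all numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(7)
# Per-capita water availability (m^3/yr) projected by, say, 45 climate-hydrology
# model combinations for one country at a given warming level (hypothetical values).
ensemble = rng.normal(loc=1200.0, scale=350.0, size=45)

SCARCITY_THRESHOLD = 1000.0      # a commonly cited scarcity level, here an assumption
risk = np.mean(ensemble < SCARCITY_THRESHOLD)
print(f"likelihood of crossing the threshold: {risk:.0%}")
```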
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape-tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width selected appropriately according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a larger signal-to-noise-ratio improvement than traditional spatial-only or spectral-only methods, while better preserving local spectral absorption features.
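A rough sketch of the two-stage idea: a smooth compromise between hard and soft thresholding for the wavelet stage, followed by cubic Savitzky-Golay smoothing along the spectral axis. The threshold function below is a generic differentiable blend, not the exact improved function proposed in the paper, and the hyperspectral cube is a random placeholder.

```python
# Stage 1 idea: a differentiable threshold function between soft and hard
# shrinkage. Stage 2: Savitzky-Golay smoothing along the band axis.
import numpy as np
from scipy.signal import savgol_filter

def smooth_threshold(w, thr, alpha=10.0):
    """Shrink small coefficients like soft thresholding, keep large ones nearly
    unchanged like hard thresholding, with a smooth transition set by alpha."""
    blend = 1.0 / (1.0 + np.exp(-alpha * (np.abs(w) - thr)))   # 0 -> soft, 1 -> hard
    soft = np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)
    return blend * w + (1.0 - blend) * soft

rng = np.random.default_rng(8)
coeffs = rng.normal(size=1000)
print(np.abs(smooth_threshold(coeffs, thr=1.0)).mean())   # shrunken coefficients

# Spectral stage: cubic Savitzky-Golay smoothing along the band axis of a
# (rows, cols, bands) cube; the cube here is random placeholder data.
cube = rng.normal(size=(50, 50, 120))
smoothed = savgol_filter(cube, window_length=11, polyorder=3, axis=2)
print(smoothed.shape)
```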
Hanousek, Ondrej; Santner, Jakob; Mason, Sean; Berger, Torsten W; Wenzel, Walter W; Prohaska, Thomas
2016-11-01
A diffusive gradient in thin films (DGT) technique, based on a strongly basic anion exchange resin (Amberlite IRA-400), was successfully tested for ³⁴S/³²S analysis in labile soil sulfate. Separation of matrix elements (Na, K, and Ca) that potentially cause non-spectral interferences in ³⁴S/³²S analysis by MC ICP-MS (multi-collector inductively coupled plasma-mass spectrometry) during sampling of sulfate was demonstrated. No isotopic fractionation caused by diffusion or elution of sulfate was observed below a resin gel disc loading of ≤79 μg S. Above this threshold, fractionation towards ³⁴S was observed. The method was applied to 11 different topsoils and one mineral soil profile (0-100 cm depth) and compared with soil sulfate extraction by water. The S amount and isotopic ratio in DGT-S and water-extractable sulfate correlated significantly (r² = 0.89 and r² = 0.74 for the 11 topsoils, respectively). The systematically lower ³⁴S/³²S isotope ratios of the DGT-S were ascribed to mineralization of organic S.
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components make the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores, due to their elongated shape and their low gray-value contrast with the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization have been successfully applied to image segmentation. In this paper we apply these methods for the first time to the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes into account the attenuation values that vary with depth. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray-value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pore network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections, as observed for the other thresholding methods.
To sort or not to sort: the impact of spike-sorting on neural decoding performance.
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
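The "no spike-sorting" input can be sketched as counting threshold crossings of the raw voltage per time bin; the threshold rule, sampling rate, and bin width below are illustrative assumptions rather than the study's settings.

```python
# Sketch of threshold-crossing extraction: detect downward crossings of a
# negative voltage threshold and bin the counts for a decoder. All values
# (threshold rule, sampling rate, bin width) are assumptions.
import numpy as np

def threshold_crossings(voltage, thresh):
    """Indices where the signal crosses the (negative) threshold downward."""
    below = voltage < thresh
    return np.flatnonzero(below[1:] & ~below[:-1]) + 1

rng = np.random.default_rng(9)
fs = 30000                                       # samples per second (assumed)
v = rng.normal(scale=20.0, size=fs)              # 1 s of noise-like voltage (µV)
thresh = -4.5 * np.median(np.abs(v)) / 0.6745    # a common RMS-based threshold rule

spikes = threshold_crossings(v, thresh)
bins = np.arange(0, v.size + 1, fs // 20)        # 50 ms bins
counts, _ = np.histogram(spikes, bins=bins)
print(counts)                                    # binned crossing counts per electrode
```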
To sort or not to sort: the impact of spike-sorting on neural decoding performance
NASA Astrophysics Data System (ADS)
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrodes voltage crossings: spike-sorting is useful. The four waveform based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
Gülay, Arda; Smets, Barth F
2015-09-01
Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
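A minimal sketch of the rarefaction (subsampling) step on which such β-diversity comparisons rest; the OTU counts and subsampling depth are synthetic placeholders.

```python
# Rarefaction: subsample each community without replacement to a common depth
# before comparison. The OTU counts and depth below are synthetic placeholders.
import numpy as np

def rarefy(otu_counts, depth, rng):
    """Randomly draw `depth` reads without replacement from one sample."""
    reads = np.repeat(np.arange(otu_counts.size), otu_counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=otu_counts.size)

rng = np.random.default_rng(10)
sample = rng.poisson(lam=5.0, size=200)          # 200 OTUs, synthetic counts
rarefied = rarefy(sample, depth=500, rng=rng)
print(sample.sum(), rarefied.sum())              # original vs. rarefied depth
```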
Scene text detection via extremal region based double threshold convolutional network classification
Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan
2017-01-01
In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER by incorporating saliency detection methods, is proposed to ensure a high recall rate. Given a natural image, character candidates are extracted from three channels of a perception-based, illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text, or non-text by double-threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891
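The double-threshold filtering and the recovery of weak candidates can be sketched as a hysteresis-like rule on CNN confidence scores; the threshold values, scores, and adjacency structure below are invented for illustration and simplify the paper's recursive neighborhood search.

```python
# Double-threshold filtering of candidate confidence scores: strong candidates
# are kept, weak candidates are kept only if connected to a strong one, the rest
# are rejected. Thresholds and the toy graph are assumptions.
import numpy as np

T_HIGH, T_LOW = 0.8, 0.4                          # assumed confidence thresholds

def double_threshold(scores, neighbours):
    """scores: confidence per candidate; neighbours: adjacency lists (dict)."""
    strong = set(np.flatnonzero(scores >= T_HIGH))
    weak = set(np.flatnonzero((scores >= T_LOW) & (scores < T_HIGH)))
    kept, frontier = set(strong), list(strong)
    while frontier:                               # iterative neighbourhood expansion
        for n in neighbours[frontier.pop()]:
            if n in weak and n not in kept:
                kept.add(n); frontier.append(n)
    return sorted(kept)

scores = np.array([0.95, 0.55, 0.45, 0.30, 0.85, 0.50])
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(double_threshold(scores, neighbours))       # -> [0, 1, 2, 4, 5]
```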
Bennett, Peter A.; Bennett, George L.; Belitz, Kenneth
2009-01-01
Groundwater quality in the approximately 1,180-square-mile Northern Sacramento Valley study unit (REDSAC) was investigated in October 2007 through January 2008 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within REDSAC and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 66 wells in Shasta and Tehama Counties. Forty-three of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 23 were selected to aid in evaluation of specific water-quality issues (understanding wells). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial constituents. Naturally occurring isotopes (tritium, carbon-14, stable isotopes of nitrogen and oxygen in nitrate, and stable isotopes of hydrogen and oxygen of water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. In total, over 275 constituents and field water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at approximately 8 to 11 percent of the wells, and the results for these samples were used to evaluate the quality of the data obtained from the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples were within acceptable ranges for nearly all compounds, indicating acceptably low variability. Matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and with aesthetic and technical thresholds established by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate compliance or noncompliance with those thresholds. The concentrations of most constituents detected in groundwater samples from REDSAC were below drinking-water thresholds.
Volatile organic compounds (VOC) and pesticides were detected in less than one-quarter of the samples and were generally less than a hundredth of any health-based thresholds. NDMA was detected in one grid well above the NL-CA. Concentrations of all nutrients and trace elements in samples from REDSAC wells were below the health-based thresholds except those of arsenic in three samples, which were above the USEPA maximum contaminant level (MCL-US). However
Soft x-ray free-electron laser induced damage to inorganic scintillators
Burian, Tomáš; Hájková, Věra; Chalupský, Jaromír; ...
2015-01-07
An irreversible response of inorganic scintillators to intense soft x-ray laser radiation was investigated at the FLASH (Free-electron LASer in Hamburg) facility. Three ionic crystals, namely, Ce:YAG (cerium-doped yttrium aluminum garnet), PbWO4 (lead tungstate), and ZnO (zinc oxide), were exposed to single 4.6 nm ultra-short laser pulses of variable pulse energy (up to 12 μJ) under normal incidence conditions with tight focus. Damaged areas, produced with various levels of pulse fluence, were analyzed on the surface of irradiated samples using differential interference contrast (DIC) and atomic force microscopy (AFM). The effective beam area of 22.2 ± 2.2 μm2 was determined by means of the ablation imprints method with the use of poly(methyl methacrylate) (PMMA). Applied to the three inorganic materials, this procedure gave almost the same values of the effective area. The single-shot damage threshold fluence was determined for each of these inorganic materials. The Ce:YAG sample seems to be the most radiation resistant under the given irradiation conditions; its damage threshold was determined to be as high as 660.8 ± 71.2 mJ/cm2. Contrary to that, the PbWO4 sample exhibited the lowest radiation resistance with a threshold fluence of 62.6 ± 11.9 mJ/cm2. The threshold for ZnO was found to be 167.8 ± 30.8 mJ/cm2. Both interaction and material characteristics responsible for the damage threshold difference are discussed in the article.
Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J
2017-01-01
Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions using the following criteria: (1) comparison with ground-truth images, (2) variation over the course of a day with changes in ambient illumination, (3) comparison with manual measurements, and (4) an estimation of performance over the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information acquired in the field throughout crop growth. Moreover, the introduced method has the advantage that it is not limited to growth measurements but can also be applied to other tasks such as identifying weeds, diseases, and stress.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature and are contrasted with those of other published methods.
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06) that departs from the expected 4/3 power function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
An intelligent detection method for high-field asymmetric waveform ion mobility spectrometry.
Li, Yue; Yu, Jianwen; Ruan, Zhiming; Chen, Chilai; Chen, Ran; Wang, Han; Liu, Youjiang; Wang, Xiaozhi; Li, Shan
2018-04-01
In conventional high-field asymmetric waveform ion mobility spectrometry signal acquisition, multi-cycle detection is time-consuming and somewhat limits the technique's scope for rapid field detection. In this study, a novel intelligent detection approach has been developed in which a threshold is set on the relative error of the α parameters, eliminating unnecessary time spent on detection. In this method, two full-spectrum scans are made in advance to obtain the estimated compensation voltage at different dispersion voltages, narrowing the whole scan area down to just the peak area(s) of interest. This intelligent detection method can reduce the detection time to 5-10% of that of the original full-spectrum scan in a single cycle.
A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.
Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês
2015-12-01
Diseases cooccur in individuals more often than expected by chance, and this may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
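A hedged sketch of how a Bayes factor can be approximated from GWAS summary statistics, in the spirit of Wakefield-style approximate Bayes factors mentioned above; the prior variance and the example numbers are assumptions for illustration, not values from the study.

```python
import numpy as np

def approx_log10_bf(beta_hat, se, prior_sd=0.2):
    """Approximate log10 Bayes factor (H1 vs H0) from an effect estimate and its SE.

    Under H0 the estimate ~ N(0, V); under H1 it ~ N(0, V + W), with V = se**2
    and W = prior_sd**2 (the assumed prior variance of the true effect).
    """
    V, W = se**2, prior_sd**2
    z2 = (beta_hat / se) ** 2
    log_bf10 = 0.5 * np.log(V / (V + W)) + 0.5 * z2 * W / (V + W)
    return log_bf10 / np.log(10)

# Example: the same z-score (z = 5) in studies of different size (different SE)
# yields slightly different BFs, reflecting the power adjustment discussed above.
print(approx_log10_bf(0.10, 0.02))   # larger study
print(approx_log10_bf(0.25, 0.05))   # smaller study
```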
A Ratiometric Threshold for Determining Presence of Cancer During Fluorescence-guided Surgery
Warram, Jason M; de Boer, Esther; Moore, Lindsay S.; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L
2015-01-01
Background & Objective Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Methods Open-field and closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n=11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using the mean fluorescence intensity (MFI) of each punch biopsy normalized by the MFI of patient-matched tissues. Ratios were compared to pathological assessment and a ratiometric threshold was established to predict the presence of cancer. Results During open-field imaging using an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and a specificity of 78.6% for delineating diseased tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Conclusion Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. PMID:26074273
Sutton, J A; Gillin, W P; Grattan, T J; Clarke, G D; Kilminster, S G
2002-01-01
Aims To discover whether a new infra-red laser method could detect a change in pain threshold after as mild an analgesic as paracetamol and whether an effervescent liquid formulation produced a faster onset of action than tablets. Methods This double-blind, placebo controlled randomized study used a portable, infra-red laser to measure ‘first pain’ thresholds on the nondominant forearm in 12 normal volunteers before and after 1 g of paracetamol or placebo. The mean of six recordings was determined three times before dosing, the first being used as a familiarization procedure, and 14 times after dosing. Results We detected a small (2%), statistically significant difference in pain threshold between a liquid formulation of paracetamol and placebo at 30 and 60 min (P = 0.004 and P = 0.001), but not between tablets and placebo. Liquid also increased the threshold significantly compared with tablets at 60 min (P = 0.01). Conclusions To detect such a small increase in pain threshold requires a highly consistent measure and the coefficient of variation was 2% for the study overall, surprisingly low for a subjective phenomenon. The reasons for this include minimizing reflectance by blacking the skin, using a nonhairy site, averaging six data points at each sample time and controlling closely the ambient conditions and the subjects’ preparation for studies. PMID:11849194
Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-01-01
Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550
Rapid Isolation and Detection for RNA Biomarkers for TBI Diagnostics
2016-10-01
address the qualitative result of PCR by choosing the threshold crossover cycle (CT) as a surrogate measure of the RNA/DNA originally in the sample ...include developing DEP techniques for isolation of cell-free (cf) RNA from glioblastoma exosomes and TBI samples (IRB dependent); methods for on... Sample to Answer diagnostics.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a shape similar to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model for qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as a NIR-based method in the process of BP-ANN modeling.
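A minimal sketch of the MRCC-plus-neural-network idea described above: each spectrum is reduced to its correlation coefficients against a set of reference spectra, and those coefficients are fed to a small back-propagation network. The reference set, network size and use of scikit-learn's MLPClassifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def mrcc_features(spectrum, reference_spectra):
    """Correlation coefficient of one NIR spectrum against each reference spectrum."""
    return np.array([np.corrcoef(spectrum, ref)[0, 1] for ref in reference_spectra])

def train_calamine_model(spectra, labels, reference_spectra):
    """spectra: (n_samples, n_wavelengths); labels: e.g. crude / counterfeit / processed."""
    X = np.array([mrcc_features(s, reference_spectra) for s in spectra])
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf

def classify(clf, spectrum, reference_spectra):
    return clf.predict(mrcc_features(spectrum, reference_spectra).reshape(1, -1))[0]
```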
NASA Astrophysics Data System (ADS)
Sztáray, Bálint; Voronova, Krisztina; Torma, Krisztián G.; Covert, Kyle J.; Bodi, Andras; Hemberger, Patrick; Gerber, Thomas; Osborn, David L.
2017-07-01
Photoelectron photoion coincidence (PEPICO) spectroscopy could become a powerful tool for the time-resolved study of multi-channel gas phase chemical reactions. Toward this goal, we have designed and tested electron and ion optics that form the core of a new PEPICO spectrometer, utilizing simultaneous velocity map imaging for both cations and electrons, while also achieving good cation mass resolution through space focusing. These optics are combined with a side-sampled, slow-flow chemical reactor for photolytic initiation of gas-phase chemical reactions. Together with a recent advance that dramatically increases the dynamic range in PEPICO spectroscopy [D. L. Osborn et al., J. Chem. Phys. 145, 164202 (2016)], the design described here demonstrates a complete prototype spectrometer and reactor interface to carry out time-resolved experiments. Combining dual velocity map imaging with cation space focusing yields tightly focused photoion images for translationally cold neutrals, while offering good mass resolution for thermal samples as well. The flexible optics design incorporates linear electric fields in the ionization region, surrounded by dual curved electric fields for velocity map imaging of ions and electrons. Furthermore, the design allows for a long extraction stage, which makes this the first PEPICO experiment to combine ion imaging with the unimolecular dissociation rate constant measurements of cations to detect and account for kinetic shifts. Four examples are shown to illustrate some capabilities of this new design. We recorded the threshold photoelectron spectrum of the propargyl and the iodomethyl radicals. While the former agrees well with a literature threshold photoelectron spectrum, we have succeeded in resolving the previously unobserved vibrational structure in the latter. We have also measured the bimolecular rate constant of the CH2I + O2 reaction and observed its product, the smallest Criegee intermediate, CH2OO. Finally, the second dissociative photoionization step of iodocyclohexane ions, the loss of ethylene from the cyclohexyl cation, is slow at threshold, as illustrated by the asymmetric threshold photoionization time-of-flight distributions.
Precluding nonlinear ISI in direct detection long-haul fiber optic systems
NASA Technical Reports Server (NTRS)
Swenson, Norman L.; Shoop, Barry L.; Cioffi, John M.
1991-01-01
Long-distance, high-rate fiber optic systems employing directly modulated 1.55-micron single-mode lasers and conventional single-mode fiber suffer severe intersymbol interference (ISI) with a large nonlinear component. A method of reducing the nonlinearity of the ISI, thereby making linear equalization more viable, is investigated. It is shown that the degree of nonlinearity is highly dependent on the choice of laser bias current, and that in some cases the ISI nonlinearity can be significantly reduced by biasing the laser substantially above threshold. Simulation results predict that an increase in signal-to-nonlinear-distortion ratio as high as 25 dB can be achieved for synchronously spaced samples at an optimal sampling phase by increasing the bias current from 1.2 times threshold to 3.5 times threshold. The high SDR indicates that a linear tapped delay line equalizer could be used to mitigate ISI. Furthermore, the shape of the pulse response suggests that partial response precoding and digital feedback equalization would be particularly effective for this channel.
The fragmentation threshold and implications for explosive eruptions
NASA Astrophysics Data System (ADS)
Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.
2003-04-01
The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Alidibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation. Pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high temperature experiments form a curve with less variability than the low temperature experiments. For this reason, we have chosen to model the high temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample. For this reason, the threshold values can be more accurately predicted by the matrix fraction multiplied by the tensile strength and divided by the porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal contents are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or amount of overpressure necessary to produce an explosive eruption.
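Writing out the empirical scalings described above as equations (the symbols and exact form are introduced here for illustration; the abstract gives only the verbal relations):

$$P_{\mathrm{th}} \approx \frac{\sigma_t}{\phi} \quad \text{(crystal-poor samples)}, \qquad P_{\mathrm{th}} \approx \frac{f_{\mathrm{matrix}}\,\sigma_t}{\phi} \quad \text{(crystal-bearing samples)},$$

where P_th is the fragmentation threshold, σ_t ≈ 140 MPa is the tensile strength of glass, φ is the porosity (as a fraction), and f_matrix is the matrix fraction accounting for the weakening effect of fractured phenocrysts.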
Low Voltage Electrowetting-on-Dielectric Platform using Multi-Layer Insulators
Lin, Yan-You; Evans, Randall D.; Welch, Erin; Hsu, Bang-Ning; Madison, Andrew C.; Fair, Richard B.
2010-01-01
A low voltage, two-level-metal, and multi-layer insulator electrowetting-on-dielectric (EWD) platform is presented. Dispensing 300 pl droplets from 140 nl closed on-chip reservoirs was accomplished with as little as 11.4 V solely through EWD forces, and the actuation threshold voltage was 7.2 V with a 1 Hz voltage switching rate between electrodes. EWD devices were fabricated with a multilayer insulator consisting of 135 nm sputtered tantalum pentoxide (Ta2O5) and 180 nm parylene C coated with 70 nm of CYTOP. Furthermore, the minimum actuation threshold voltage followed a previously published scaling model for the threshold voltage, VT, which is proportional to (t/εr)^1/2, where t and εr are the insulator thickness and dielectric constant, respectively. Device threshold voltages are compared for several insulator thicknesses (200 nm, 500 nm, and 1 µm), different dielectric materials (parylene C and tantalum pentoxide), and homogeneous versus heterogeneous compositions. Additionally, we used a two-level-metal fabrication process, which enables the fabrication of smaller and denser electrodes with high interconnect routing flexibility. We also have achieved low dispensing and actuation voltages for scaled devices with 30 pl droplets. PMID:20953362
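The quoted scaling model can be written explicitly (the proportionality constant, set by surface tension and the required contact-angle change, is left symbolic since it is not given above):

$$V_T \propto \left(\frac{t}{\varepsilon_r}\right)^{1/2},$$

so that, for example, reducing the insulator thickness t by a factor of four, or increasing its relative permittivity ε_r by the same factor, is expected to halve the actuation threshold voltage V_T.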
Multiple Fingers - One Gestalt.
Lezkan, Alexandra; Manuel, Steven G; Colgate, J Edward; Klatzky, Roberta L; Peshkin, Michael A; Drewing, Knut
2016-01-01
The Gestalt theory of perception offered principles by which distributed visual sensations are combined into a structured experience ("Gestalt"). We demonstrate conditions whereby haptic sensations at two fingertips are integrated in the perception of a single object. When virtual bumps were presented simultaneously to the right hand's thumb and index finger during lateral arm movements, participants reported perceiving a single bump. A discrimination task measured the bump's perceived location and perceptual reliability (assessed by differential thresholds) for four finger configurations, which varied in their adherence to the Gestalt principles of proximity (small versus large finger separation) and synchrony (virtual spring to link movements of the two fingers versus no spring). According to models of integration, reliability should increase with the degree to which multi-finger cues integrate into a unified percept. Differential thresholds were smaller in the virtual-spring condition (synchrony) than when fingers were unlinked. Additionally, in the condition with reduced synchrony, greater proximity led to lower differential thresholds. Thus, with greater adherence to Gestalt principles, thresholds approached values predicted for optimal integration. We conclude that the Gestalt principles of synchrony and proximity apply to haptic perception of surface properties and that these principles can interact to promote multi-finger integration.
NASA Astrophysics Data System (ADS)
Hendricks, Lorin; Spencer Guthrie, W.; Mazzeo, Brian
2018-04-01
An automated acoustic impact-echo testing device with seven channels has been developed for faster surveying of bridge decks. Due to potential variations in bridge deck overlay thickness, varying conditions between testing passes, and occasional imprecise equipment calibrations, a method that can account for variations in deck properties and testing conditions was necessary to correctly interpret the acoustic data. A new methodology involving statistical analyses was therefore developed. After acoustic impact-echo data are collected and analyzed, the results are normalized by the median for each channel, a Gaussian distribution is fit to the histogram of the data, and the Kullback-Leibler divergence test or Otsu's method is then used to determine the optimum threshold for differentiating between intact and delaminated concrete. The new methodology was successfully applied to individual channels of previously unusable acoustic impact-echo data obtained from a three-lane interstate bridge deck surfaced with a polymer overlay, and the resulting delamination map compared very favorably with the results of a manual deck sounding survey.
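A small sketch of the thresholding step described above: per-channel amplitudes are normalized by their median and an Otsu-style threshold separates the two response classes. The data layout, variable names and the sign convention for "delaminated" are assumptions for illustration, and the Kullback-Leibler divergence variant is omitted.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: threshold maximizing between-class variance of a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                   # class-0 weight per candidate split
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2                # between-class variance
    return centers[np.argmax(between)]

def classify_channel(amplitudes):
    normalized = amplitudes / np.median(amplitudes)     # per-channel median normalization
    t = otsu_threshold(normalized)
    return normalized > t          # True where the response suggests delamination (assumed sign)
```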
Inoue, K; Yoshimura, Y; Makino, T; Nakazawa, H
2000-11-01
Alkylphenols can affect human health because they disrupt the endocrine system. In this study, an analytical method for determining trace amounts of 4-nonylphenol (NP) and 4-octylphenol (OP) in human blood samples was developed. Reversed-phase HPLC with multi-electrode electrochemical coulometric-array detection was used for the determination of NP and OP in plasma and serum samples prepared with a solid-phase extraction method. The separation was achieved using an isocratic mobile phase of 0.7% phosphoric acid-acetonitrile with a C18 reversed-phase column. The detection limits of NP and OP were 1.0 and 0.5 ng ml⁻¹, respectively. The recoveries of NP and OP added to human plasma samples were above 70.0% with a relative standard deviation of less than 15.5%. The method was found to be applicable to the determination of NP and OP in various human blood samples such as serum and plasma.
GNSS software receiver sampling noise and clock jitter performance and impact analysis
NASA Astrophysics Data System (ADS)
Chen, Jian Yun; Feng, XuZhe; Li, XianBin; Wu, GuangYao
2015-02-01
The design of multi-frequency, multi-constellation GNSS software-defined radio receivers is becoming more and more popular due to their simple architecture, flexible configuration and good coherence in multi-frequency signal processing. Such receivers play an important role in navigation signal processing and signal quality monitoring. In particular, driving the sampling clock of the analogue-to-digital converter (ADC) from an FPGA implies that a more flexible radio transceiver design is possible. According to the concept of software defined radio (SDR), the ideal is to digitize as close to the antenna as possible. However, because the carrier frequency of a GNSS signal is in the GHz range, converting at this frequency is expensive and consumes more power. The band-sampling method is a cheaper, more effective alternative: it allows an RF signal to be sampled at twice the bandwidth of the signal. Unfortunately, as the other side of the coin, the SDR concept and the band-sampling method degrade the performance of GNSS receivers. The ADC suffers larger sampling-clock jitter when the clock is generated by the FPGA, and a low sampling frequency introduces more noise into the receiver, so the influence of sampling noise cannot be neglected. The paper analyzes the sampling noise, presents its influence on the carrier-to-noise ratio, and derives the ranging error by calculating the synchronization error of the delay-locked loop. Simulations addressing each factor contributing to the sampling-noise-induced ranging error are performed. Simulation and experiment results show that if the target ranging accuracy is at the centimeter level, the quantization length should be no less than 8 bits and the sampling clock jitter should not exceed 30 ps.
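As a back-of-the-envelope check on the clock-jitter figure quoted above, the sketch below evaluates the standard aperture-jitter bound on achievable SNR, SNR ≈ -20·log10(2π·f_in·σ_j). The input frequency chosen is an assumption for illustration, and this simple bound is not the paper's full ranging-error derivation.

```python
import numpy as np

def jitter_limited_snr_db(f_in_hz, jitter_rms_s):
    """Upper bound on SNR imposed by RMS sampling-clock jitter (aperture-jitter limit)."""
    return -20.0 * np.log10(2.0 * np.pi * f_in_hz * jitter_rms_s)

# Assumed example: a band-sampled GNSS signal whose highest input frequency
# component is a few tens of MHz.
for jitter_ps in (10, 30, 100):
    snr = jitter_limited_snr_db(40e6, jitter_ps * 1e-12)
    print(f"jitter = {jitter_ps:3d} ps  ->  jitter-limited SNR ~ {snr:.1f} dB")
```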
Saxena, Sushil Kumar; Rangasamy, Rajesh; Krishnan, Anoop A; Singh, Dhirendra P; Uke, Sumedh P; Malekadi, Praveen Kumar; Sengar, Anoop S; Mohamed, D Peer; Gupta, Ananda
2018-09-15
An accurate, reliable and fast multi-residue, multi-class method using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was developed and validated for the simultaneous determination and quantification of 24 pharmacologically active substances of three different classes (quinolones including fluoroquinolones, sulphonamides and tetracyclines) in aquaculture shrimps. Sample preparation involves extraction with acetonitrile containing 0.1% formic acid, followed by clean-up with n-hexane and 0.1% methanol in water, with analysis by UPLC-MS/MS within 8 min. The method was validated according to European Commission Decision 2002/657. Acceptable values were obtained for linearity (5-200 μg kg⁻¹), specificity, limit of quantification (5-10 μg kg⁻¹), recovery (between 83 and 100%), repeatability (RSD < 9%), within-lab reproducibility (RSD < 15%), reproducibility (RSD ≤ 22%), decision limit (105-116 μg kg⁻¹) and detection capability (110-132 μg kg⁻¹). The validated method was applied to aquaculture shrimp samples from India. Copyright © 2018 Elsevier Ltd. All rights reserved.
From Sample to Multi-Omics Conclusions in under 48 Hours
Navas-Molina, Jose A.; Hyde, Embriette R.; Vázquez-Baeza, Yoshiki; Humphrey, Greg; Gaffney, James; Minich, Jeremiah J.; Melnik, Alexey V.; Herschend, Jakob; DeReus, Jeff; Durant, Austin; Dutton, Rachel J.; Khosroheidari, Mahdieh; Green, Clifford; da Silva, Ricardo; Dorrestein, Pieter C.; Knight, Rob
2016-01-01
ABSTRACT Multi-omics methods have greatly advanced our understanding of the biological organism and its microbial associates. However, they are not routinely used in clinical or industrial applications, due to the length of time required to generate and analyze omics data. Here, we applied a novel integrated omics pipeline for the analysis of human and environmental samples in under 48 h. Human subjects that ferment their own foods provided swab samples from skin, feces, oral cavity, fermented foods, and household surfaces to assess the impact of home food fermentation on their microbial and chemical ecology. These samples were analyzed with 16S rRNA gene sequencing, inferred gene function profiles, and liquid chromatography-tandem mass spectrometry (LC-MS/MS) metabolomics through the Qiita, PICRUSt, and GNPS pipelines, respectively. The human sample microbiomes clustered with the corresponding sample types in the American Gut Project (http://www.americangut.org), and the fermented food samples produced a separate cluster. The microbial communities of the household surfaces were primarily sourced from the fermented foods, and their consumption was associated with increased gut microbial diversity. Untargeted metabolomics revealed that human skin and fermented food samples had separate chemical ecologies and that stool was more similar to fermented foods than to other sample types. Metabolites from the fermented foods, including plant products such as procyanidin and pheophytin, were present in the skin and stool samples of the individuals consuming the foods. Some food metabolites were modified during digestion, and others were detected in stool intact. This study represents a first-of-its-kind analysis of multi-omics data that achieved time intervals matching those of classic microbiological culturing. IMPORTANCE Polymicrobial infections are difficult to diagnose due to the challenge in comprehensively cultivating the microbes present. Omics methods, such as 16S rRNA sequencing, metagenomics, and metabolomics, can provide a more complete picture of a microbial community and its metabolite production, without the biases and selectivity of microbial culture. However, these advanced methods have not been applied to clinical or industrial microbiology or other areas where complex microbial dysbioses require immediate intervention. The reason for this is the length of time required to generate and analyze omics data. Here, we describe the development and application of a pipeline for multi-omics data analysis in time frames matching those of the culture-based approaches often used for these applications. This study applied multi-omics methods effectively in clinically relevant time frames and sets a precedent toward their implementation in clinical medicine and industrial microbiology. PMID:27822524
Densmore, Jill N.; Fram, Miranda S.; Belitz, Kenneth
2009-01-01
Ground-water quality in the approximately 1,630 square-mile Owens and Indian Wells Valleys study unit (OWENS) was investigated in September-December 2006 as part of the Priority Basin Project of Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in collaboration with the California State Water Resources Control Board (SWRCB). The Owens and Indian Wells Valleys study was designed to provide a spatially unbiased assessment of raw ground-water quality within OWENS study unit, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 74 wells in Inyo, Kern, Mono, and San Bernardino Counties. Fifty-three of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 21 wells were selected to evaluate changes in water chemistry in areas of interest (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater- indicator compounds], constituents of special interest [perchlorate, N-nitrosodimethylamine (NDMA), and 1,2,3- trichloropropane (1,2,3-TCP)], naturally occurring inorganic constituents [nutrients, major and minor ions, and trace elements], radioactive constituents, and microbial indicators. Naturally occurring isotopes [tritium, and carbon-14, and stable isotopes of hydrogen and oxygen in water], and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. This study evaluated the quality of raw ground water in the aquifer in the OWENS study unit and did not attempt to evaluate the quality of treated water delivered to consumers. Water supplied to consumers typically is treated after withdrawal from the ground, disinfected, and blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and non-regulatory thresholds established for aesthetic concerns (secondary maximum contamination levels, SMCL-CA) by CDPH. VOCs and pesticides were detected in samples from less than one-third of the grid wells; all detections were below health-based thresholds, and most were less than one-one hundredth of threshold values. All detections of perchlorate and nutrients in samples from OWENS were below health-based thresholds. Most detections of trace elements in ground-water samples from OWENS wells were below health-based thresholds. In samples from the 53 grid wells, three constituents were detected at concentrations above USEPA maximum contaminant levels: arsenic in 5 samples, uranium in 4 samples, and fluoride in 1 sample. 
Two constituents were detected at concentrations above CDPH notification levels (boron in 9 samples and vanadium in 1 sample), and two were above USEPA lifetime health advisory levels (molybdenum in 3 samples and strontium in 1 sample). Most of the samples from OWENS wells had concentrations of major elements, TDS, and trace elements below the non-enforceable standards set for aesthetic concerns. Samples from nine grid wells had concentrations of manganese, iron, or TDS above the SMCL-CAs.
NASA Astrophysics Data System (ADS)
Yamaguchi, Atsuko; Ohashi, Takeyoshi; Kawasaki, Takahiro; Inoue, Osamu; Kawada, Hiroki
2013-04-01
A new method for calculating critical dimensions (CDs) at the top and bottom of three-dimensional (3D) pattern profiles from a critical-dimension scanning electron microscope (CD-SEM) image, called the "T-sigma method", is proposed and evaluated. Without preparing a library or database in advance, T-sigma can estimate features of a pattern sidewall. Furthermore, it supplies the optimum edge definition (i.e., the threshold level for determining edge position from a CD-SEM signal) to detect the top and bottom of the pattern. This method consists of three steps. First, two components of line-edge roughness (LER), the noise-induced bias (i.e., LER bias) and the unbiased component (i.e., bias-free LER), are calculated at a set threshold level. Second, these components are calculated at various threshold values, and the threshold-dependence of these two components, the "T-sigma graph", is obtained. Finally, the optimum threshold values for top and bottom edge detection are given by analysis of the T-sigma graph. T-sigma was applied to CD-SEM images of three kinds of resist-pattern samples. In addition, reference metrology was performed with an atomic force microscope (AFM) and a scanning transmission electron microscope (STEM). The sensitivity of the CD measured by T-sigma to the reference CD was higher than or equal to that measured by the conventional edge definition. Regarding absolute measurement accuracy, T-sigma showed better results than the conventional definition. Furthermore, T-sigma graphs were calculated from CD-SEM images of two kinds of resist samples and compared with corresponding STEM observation results. Both bias-free LER and LER bias increased as the detected edge point moved from the bottom to the top of the pattern in the case of a pattern with a straight sidewall and a round top. On the other hand, they were almost constant in the case of a pattern with a re-entrant profile. T-sigma should thus be able to reveal re-entrant features. From these results, it is found that the T-sigma method can provide rough cross-sectional pattern features and achieve quick, easy and accurate measurements of top and bottom CDs.
Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.
Rogers, Emily; Murrugarra, David; Heitsch, Christine
2017-07-25
Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.
An automated approach towards detecting complex behaviours in deep brain oscillations.
Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi
2014-03-15
Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance, be shape and latency invariant and require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
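A toy sketch of the detection scheme outlined above: a detection contour (here a moving-RMS envelope) is compared against an adaptive threshold (here a running median plus a multiple of the MAD), and the comparison yields a binary event signal. The choice of contour, threshold rule, window lengths and gain are illustrative assumptions standing in for the parameters the authors tune with a multi-objective genetic algorithm.

```python
import numpy as np

def moving_rms(x, win):
    """Detection contour: RMS of the signal over a sliding window."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x**2, kernel, mode="same"))

def adaptive_threshold(contour, win, k):
    """Threshold: running median plus k times the median absolute deviation."""
    pad = win // 2
    padded = np.pad(contour, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, win)[: len(contour)]
    med = np.median(windows, axis=1)
    mad = np.median(np.abs(windows - med[:, None]), axis=1)
    return med + k * mad

def detect_events(lfp, contour_win=50, thresh_win=500, k=3.0):
    contour = moving_rms(np.asarray(lfp, dtype=float), contour_win)
    thresh = adaptive_threshold(contour, thresh_win, k)
    return contour > thresh        # binary signal marking candidate events
```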
Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas
2014-11-01
In the context of driving under the influence of drugs (DUID), not only common drugs of abuse may have an influence, but also medications with similar mechanisms of action. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an ABSciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM) with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW, however CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except for MAM. Systematic investigation of accuracy determined for QC MED in a multi-analyte mixture compared to samples containing only single analytes revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Zhao, Limian; Lucas, Derick; Long, David; Richter, Bruce; Stevens, Joan
2018-05-11
This study presents the development and validation of a quantitation method for the analysis of multi-class, multi-residue veterinary drugs using lipid removal cleanup cartridges, enhanced matrix removal lipid (EMR-Lipid), for different meat matrices by liquid chromatography tandem mass spectrometry detection. Meat samples were extracted using a two-step solid-liquid extraction followed by pass-through sample cleanup. The method was optimized based on the buffer and solvent composition, solvent additive additions, and EMR-Lipid cartridge cleanup. The developed method was then validated in five meat matrices, porcine muscle, bovine muscle, bovine liver, bovine kidney and chicken liver to evaluate the method performance characteristics, such as absolute recoveries and precision at three spiking levels, calibration curve linearity, limit of quantitation (LOQ) and matrix effect. The results showed that >90% of veterinary drug analytes achieved satisfactory recovery results of 60-120%. Over 97% analytes achieved excellent reproducibility results (relative standard deviation (RSD) < 20%), and the LOQs were 1-5 μg/kg in the evaluated meat matrices. The matrix co-extractive removal efficiency by weight provided by EMR-lipid cartridge cleanup was 42-58% in samples. The post column infusion study showed that the matrix ion suppression was reduced for samples with the EMR-Lipid cartridge cleanup. The reduced matrix ion suppression effect was also confirmed with <15% frequency of compounds with significant quantitative ion suppression (>30%) for all tested veterinary drugs in all of meat matrices. The results showed that the two-step solid-liquid extraction provides efficient extraction for the entire spectrum of veterinary drugs, including the difficult classes such as tetracyclines, beta-lactams etc. EMR-Lipid cartridges after extraction provided efficient sample cleanup with easy streamlined protocol and minimal impacts on analytes recovery, improving method reliability and consistency. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shirazi, M. R.; Mohamed Taib, J.; De La Rue, R. M.; Harun, S. W.; Ahmad, H.
2015-03-01
Dynamic characteristics of a multi-wavelength Brillouin-Raman fiber laser (MBRFL) assisted by four-wave mixing have been investigated through the development of Stokes and anti-Stokes lines under different combinations of Brillouin and Raman pump power levels and different Raman pumping schemes in a ring cavity. For a Stokes line of order higher than three, the threshold power was less than the saturation power of the preceding-order Stokes line. By increasing the Brillouin pump power, the nth-order anti-Stokes and the (n+4)th-order Stokes power levels unexpectedly increased almost identically below the Stokes-line threshold power. It was also found that the SBS threshold reduction (SBSTR) depended linearly on the gain factor for the 1st and 2nd Stokes lines (the first set). For the 3rd and 4th Stokes lines (the second set), this relation was almost linear with the same slope up to an SBSTR of -6 dB, and then approached the linear relation of the first set when the gain factor was increased to 50 dB. Therefore, the threshold power levels of the Stokes lines for a given Raman gain can be readily estimated simply by knowing the threshold power levels without Raman amplification.
Pan, Minghao; Yang, Yongmin; Guan, Fengjiao; Hu, Haifeng; Xu, Hailong
2017-01-01
The accurate monitoring of blade vibration under operating conditions is essential in turbo-machinery testing. Blade tip timing (BTT) is a promising non-contact technique for the measurement of blade vibrations. However, BTT sampling data are inherently under-sampled and contaminated with several measurement uncertainties. Recovering the frequency spectra of blade vibrations by processing these under-sampled, biased signals is a bottleneck problem. A novel method of BTT signal processing for alleviating measurement uncertainties in the recovery of multi-mode blade vibration frequency spectra is proposed in this paper. The method can be divided into four phases. First, a single measurement vector model is built by exploiting the fact that blade vibration signals are sparse in the frequency domain. Secondly, the uniqueness of the nonnegative sparse solution is studied to achieve the vibration frequency spectrum. Thirdly, typical sources of BTT measurement uncertainty are quantitatively analyzed. Finally, an improved vibration frequency spectrum recovery method is proposed to obtain a guaranteed level of sparsity in the solution when measurement results are biased. Simulations and experiments are performed to prove the feasibility of the proposed method. The most outstanding advantage is that this method can prevent the recovered multi-mode vibration spectra from being affected by BTT measurement uncertainties without increasing the probe number. PMID:28758952
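A simplified sketch of recovering a nonnegative, effectively sparse vibration spectrum from irregularly timed (under-sampled) tip-deflection measurements by nonnegative least squares over a Fourier dictionary. The probe geometry, frequency grid and the use of scipy's nnls as a stand-in for the paper's sparsity-promoting recovery are assumptions for illustration, not the authors' full algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def recover_spectrum(t_samples, displacements, freq_grid_hz):
    """Nonnegative spectrum estimate from irregularly timed tip deflections.

    Builds an overcomplete dictionary of signed cosines and sines on freq_grid_hz
    and solves min ||A x - d|| subject to x >= 0; the amplitude at each frequency
    is recovered from the corresponding signed cosine/sine quadruple.
    """
    cols = []
    for f in freq_grid_hz:
        c = np.cos(2 * np.pi * f * t_samples)
        s = np.sin(2 * np.pi * f * t_samples)
        cols.extend([c, -c, s, -s])          # split signs so coefficients can stay >= 0
    A = np.column_stack(cols)
    x, _ = nnls(A, displacements)
    q = x.reshape(-1, 4)
    return np.hypot(q[:, 0] - q[:, 1], q[:, 2] - q[:, 3])   # amplitude per frequency

# Example: two vibration modes sampled at irregular blade-passing times (assumed data).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 0.2, 60))
d = 1.0 * np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 430 * t)
amps = recover_spectrum(t, d + 0.02 * rng.standard_normal(t.size), np.arange(50, 600, 10))
```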
Identifying a Probabilistic Boolean Threshold Network From Samples.
Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya
2018-04-01
This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and of triplets and larger tuples of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudo-polynomial time but remains co-NP complete.
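For concreteness, a Boolean threshold function with unit coefficients, as considered above, outputs 1 exactly when the number of satisfied literals reaches a threshold. The tiny sketch below evaluates such a function and checks whether a candidate is consistent with a set of state-transition samples; the representation chosen (signed literals plus an integer threshold) is an illustrative assumption, not the paper's notation.

```python
def threshold_fn(literals, theta, state):
    """Unit-coefficient Boolean threshold function.

    literals: list of (index, sign) pairs; sign +1 uses x[i], sign -1 uses NOT x[i].
    Returns 1 iff the number of satisfied literals is at least theta.
    """
    active = sum(1 for i, sign in literals
                 if (state[i] == 1) == (sign == +1))
    return 1 if active >= theta else 0

def consistent(literals, theta, node, samples):
    """Check a candidate function against observed (state, next_state) samples."""
    return all(threshold_fn(literals, theta, s) == s_next[node]
               for s, s_next in samples)

# Example: x0' = [x1 + NOT x2 >= 1], checked on four observed transitions.
samples = [((0, 1, 0), (1,)), ((0, 0, 1), (0,)), ((0, 1, 1), (1,)), ((0, 0, 0), (1,))]
print(consistent([(1, +1), (2, -1)], 1, 0, samples))   # True
```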
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Feng, Liefeng; Wang, Shupeng; Li, Yang; Li, Ding; Wang, Cunda
2018-03-01
The opposite sudden change of electrical characteristics between narrow and wide band-gap multi-quantum-well (MQW) laser diodes (LDs) in the threshold region (defined as the current region between the two kinks of the I·dV/dI-I curve) shows an interesting phenomenon: the slope changes of the I·dV/dI-I or Vj-I curve between two adjacent regions (‘below’ and ‘in’, or ‘in’ and ‘above’ the threshold region) display an approximately exponential relationship with the wavelengths of the LDs. After comparing the exciton binding energy in different MQW LDs, and analyzing the temperature dependence of the Vj-I and I·dV/dI-I curves of GaN MQW LDs, we suggest that the fraction of exciton recombination contributing to lasing is a cause of the relationship between the sudden changes of the electrical characteristics and the wavelengths of the LDs.
NASA Astrophysics Data System (ADS)
González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.
2015-04-01
Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to the advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact of sampling, providing three-dimensional (3D) information and allowing rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and to the best method to estimate the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chisel and plow). Soil samples for each of the tillage treatments were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using an mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening and later several corrections were applied during reconstruction. References Elliot, T.R. and Heck, R.J. 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412. Grau, J., Méndez, V., Tarquis, A.M., Saa, A. and Díaz, M.C. 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Houston, A.N., S. Schmidt, A.M. Tarquis, W. Otten, P.C. Baveye, S.M. Hapca. Effect of scanning and image reconstruction settings in X-ray computed tomography on soil image quality and segmentation performance. Geoderma, 207-208, 154-165, 2013a. Houston, A., Otten, W., Baveye, Ph., Hapca, S. Adaptive-Window Indicator Kriging: A Thresholding Method for Computed Tomography. Computers & Geosciences, 54, 239-248, 2013b. Tarquis, A.M., R.J. Heck, D. Andina, A. Alvarez and J.M. Antón. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239, 2009. Tarquis, A.M., D. Giménez, A. Saa, M.C. Díaz and J.M. Gascó. Scaling and Multiscaling of Soil Pore Systems Determined by Image Analysis. Scaling Methods in Soil Systems. Pachepsky, Radcliffe and Selim Eds., 19-33, 2003. CRC Press, Boca Ratón, Florida. Acknowledgements First author acknowledges the financial support obtained from the Soil Imaging Laboratory (University of Guelph, Canada) in 2014.
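A compact sketch of estimating generalized dimensions D_q from a grey-level image by the fixed-grid box-counting algorithm mentioned above, treating grey values directly as measures with no threshold. The box sizes, q values and power-of-two grids are illustrative assumptions, and the gliding-box variant is not shown.

```python
import numpy as np

def generalized_dimensions(grey, box_sizes=(2, 4, 8, 16, 32), qs=(0, 1, 2)):
    """Estimate D_q of a 2-D grey-level image by fixed-grid box counting."""
    grey = grey.astype(float)
    mu_total = grey.sum()
    results = {}
    for q in qs:
        logs_eps, logs_chi = [], []
        for s in box_sizes:
            h, w = (grey.shape[0] // s) * s, (grey.shape[1] // s) * s
            boxes = grey[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0] / mu_total          # box measures
            chi = np.sum(p * np.log(p)) if q == 1 else np.log(np.sum(p ** q))
            logs_eps.append(np.log(s / max(grey.shape)))   # relative box size
            logs_chi.append(chi)
        slope = np.polyfit(logs_eps, logs_chi, 1)[0]
        results[q] = slope if q == 1 else slope / (q - 1)
    return results
```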
An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification
Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou
2017-01-01
In this paper, we propose a multiwindow Adaptive S-method (AS-method) distribution approach for the time-frequency analysis of radar signals. Building on orthogonal Hermite functions, which have good time-frequency resolution, we vary the window length to suppress the oscillating components caused by cross-terms. This method achieves a better compromise between auto-term concentration and cross-term suppression, which aids multi-component signal separation. Finally, the effective micro-Doppler signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference. PMID:29186075
NASA Astrophysics Data System (ADS)
Meneghini, Robert
1998-09-01
A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used to obtain the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
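A rough sketch of the final estimation step described above: a lognormal density is fitted by nonlinear least squares to a histogram of rain rates observable only between a lower and an upper truncation limit, and the fitted parameters then imply statistics over all rain rates. The truncation limits, binning and use of scipy's curve_fit are illustrative assumptions, not the paper's exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def lognormal_pdf(r, mu, sigma):
    return np.exp(-(np.log(r) - mu) ** 2 / (2 * sigma**2)) / (r * sigma * np.sqrt(2 * np.pi))

def fit_truncated_lognormal(rain_rates, r_min, r_max, bins=30):
    """Fit (mu, sigma) of a lognormal to rain rates observable only in [r_min, r_max] (mm/h)."""
    hist, edges = np.histogram(rain_rates, bins=bins, range=(r_min, r_max), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def truncated_pdf(r, mu, sigma):
        # Renormalize the lognormal over the observable (truncated) range.
        z_hi = (np.log(r_max) - mu) / sigma
        z_lo = (np.log(r_min) - mu) / sigma
        return lognormal_pdf(r, mu, sigma) / (norm.cdf(z_hi) - norm.cdf(z_lo))

    (mu, sigma), _ = curve_fit(truncated_pdf, centers, hist, p0=(0.0, 1.0))
    mean_rate = np.exp(mu + sigma**2 / 2)     # implied area-average rain rate over all rates
    return mu, sigma, mean_rate
```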
ERIC Educational Resources Information Center
Mahmoudi, Hojjat; Brown, Monica R.; Amani Saribagloo, Javad; Dadashzadeh, Shiva
2018-01-01
The aim of this current research was a multi-level analysis of the relationship between school culture, basic psychological needs, and adolescents' academic alienation. One thousand twenty-nine (N = 1,029) high school students from Qom City were randomly selected through a multi-phase cluster sampling method and answered questions regarding…
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. Images PMID:2516431
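A small sketch of the second-derivative idea reported as most accurate above: along an intensity profile crossing the object edge, the threshold is taken at the profile value where the second derivative is minimal. The light smoothing step and the synthetic profile are illustrative assumptions, and the exact profile definition in the study may differ.

```python
import numpy as np

def second_derivative_threshold(profile):
    """Pick the segmentation threshold from a 1-D intensity profile across an edge.

    The threshold is the intensity at the point where the second derivative of the
    (lightly smoothed) profile is most negative, i.e. just inside the bright object.
    """
    smoothed = np.convolve(profile, np.ones(3) / 3, mode="same")
    d2 = np.gradient(np.gradient(smoothed))
    return smoothed[np.argmin(d2)]

# Synthetic example: a blurred bright fluorescent sphere on a dark background (assumed shape).
x = np.linspace(-3, 3, 121)
profile = 10 + 200 / (1 + np.exp((np.abs(x) - 1.0) / 0.15))
print(second_derivative_threshold(profile))
```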
Optimal multi-community network modularity for information diffusion
NASA Astrophysics Data System (ADS)
Wu, Jiaocan; Du, Ruping; Zheng, Yingying; Liu, Dong
2016-02-01
Recent studies demonstrate that community structure plays an important role in information spreading. In this paper, we investigate the impact of multi-community structure on information diffusion with the linear threshold model. We utilize an extended GN network that contains four communities and analyze the dynamic behavior of information spreading on it. We also identify the optimal multi-community network modularity for information diffusion based on social reinforcement. Results show that, within the appropriate range, multi-community structure facilitates information diffusion instead of hindering it, which accords with results derived from two-community networks.
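For illustration, a threshold-driven spread of this kind can be simulated directly. The sketch below uses a planted-partition graph as a stand-in for the extended GN four-community network and a simple fractional-threshold variant of the linear threshold model, in which a node activates once the fraction of its active neighbours reaches a fixed threshold; all parameter values are invented.

```python
import random
import networkx as nx

# Toy fractional-threshold spread on a planted-partition graph standing in for a
# four-community network; community sizes, link probabilities and the threshold are illustrative.
random.seed(1)
g = nx.planted_partition_graph(4, 32, p_in=0.3, p_out=0.02, seed=1)

theta = 0.25                                       # fraction of active neighbours needed to adopt
active = set(random.sample(sorted(g.nodes), 5))    # initial adopters

changed = True
while changed:                                     # iterate until no further activations occur
    changed = False
    for node in g.nodes:
        if node in active:
            continue
        nbrs = list(g.neighbors(node))
        if nbrs and sum(n in active for n in nbrs) / len(nbrs) >= theta:
            active.add(node)
            changed = True

print(f"final adoption: {len(active)}/{g.number_of_nodes()} nodes")
```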
Vlaisavljevich, Eli; Lin, Kuang-Wei; Maxwell, Adam; Warnez, Matthew; Mancia, Lauren; Singh, Rahul; Putnam, Andrew J.; Fowlkes, Brian; Johnsen, Eric; Cain, Charles; Xu, Zhen
2015-01-01
Histotripsy is an ultrasound ablation method that depends on the initiation of a cavitation bubble cloud to fractionate soft tissue. Previous work has demonstrated a cavitation cloud can be formed by a single pulse with one high amplitude negative cycle, when the negative pressure amplitude directly exceeds a pressure threshold intrinsic to the medium. We hypothesize that the intrinsic threshold in water-based tissues is determined by the properties of the water inside the tissue and changes in tissue stiffness or ultrasound frequency will have a minimal impact on the histotripsy intrinsic threshold. To test this hypothesis, the histotripsy intrinsic threshold was investigated both experimentally and theoretically. The probability of cavitation was measured by subjecting tissue phantoms with adjustable mechanical properties and ex vivo tissues to a histotripsy pulse of 1–2 cycles produced by 345 kHz, 500 kHz, 1.5 MHz, and 3 MHz histotripsy transducers. Cavitation was detected and characterized by passive cavitation detection and high-speed photography, from which the probability of cavitation was measured vs. pressure amplitude. The results demonstrated that the intrinsic threshold (the negative pressure at which probability=0.5) is independent of stiffness for Young’s moduli (E) < 1 MPa with only a small increase (~2–3 MPa) in the intrinsic threshold for tendon (E=380 MPa). Additionally, results for all samples showed only a small increase of ~2–3 MPa when the frequency was increased from 345 kHz to 3 MHz. The intrinsic threshold was measured to be between 24.7–30.6 MPa for all samples and frequencies tested in this study. Overall, the results of this study indicate that the intrinsic threshold to initiate a histotripsy bubble cloud is not significantly impacted by tissue stiffness or ultrasound frequency in hundreds of kHz to MHz range. PMID:25766571
Vlaisavljevich, Eli; Lin, Kuang-Wei; Maxwell, Adam; Warnez, Matthew T; Mancia, Lauren; Singh, Rahul; Putnam, Andrew J; Fowlkes, Brian; Johnsen, Eric; Cain, Charles; Xu, Zhen
2015-06-01
Histotripsy is an ultrasound ablation method that depends on the initiation of a cavitation bubble cloud to fractionate soft tissue. Previous work has indicated that a cavitation cloud can be formed by a single pulse with one high-amplitude negative cycle, when the negative pressure amplitude directly exceeds a pressure threshold intrinsic to the medium. We hypothesize that the intrinsic threshold in water-based tissues is determined by the properties of the water inside the tissue, and changes in tissue stiffness or ultrasound frequency will have a minimal impact on the histotripsy intrinsic threshold. To test this hypothesis, the histotripsy intrinsic threshold was investigated both experimentally and theoretically. The probability of cavitation was measured by subjecting tissue phantoms with adjustable mechanical properties and ex vivo tissues to a histotripsy pulse of 1-2 cycles produced by 345-kHz, 500-kHz, 1.5-MHz and 3-MHz histotripsy transducers. Cavitation was detected and characterized by passive cavitation detection and high-speed photography, from which the probability of cavitation was measured versus pressure amplitude. The results revealed that the intrinsic threshold (the negative pressure at which probability = 0.5) is independent of stiffness for Young's moduli (E) <1 MPa, with only a small increase (∼2-3 MPa) in the intrinsic threshold for tendon (E = 380 MPa). Additionally, results for all samples revealed only a small increase of ∼2-3 MPa when the frequency was increased from 345 kHz to 3 MHz. The intrinsic threshold was measured to be between 24.7 and 30.6 MPa for all samples and frequencies tested in this study. Overall, the results of this study indicate that the intrinsic threshold to initiate a histotripsy bubble cloud is not significantly affected by tissue stiffness or ultrasound frequency in the hundreds of kilohertz to megahertz range. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
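Because the intrinsic threshold is defined as the peak negative pressure at which the probability of cavitation equals 0.5, it can be estimated by fitting a sigmoid to measured probability-versus-pressure data. The sketch below shows that fitting step only, with invented pressures and probabilities rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sigmoid model of cavitation probability versus peak negative pressure (MPa);
# p_t is the pressure at which the fitted probability crosses 0.5.
def sigmoid(p, p_t, k):
    return 1.0 / (1.0 + np.exp(-k * (p - p_t)))

# Hypothetical measurements, not the study's data.
pressure_mpa = np.array([20.0, 22.0, 24.0, 26.0, 28.0, 30.0, 32.0])
prob_cav = np.array([0.00, 0.05, 0.20, 0.55, 0.85, 0.95, 1.00])

(p_threshold, slope), _ = curve_fit(sigmoid, pressure_mpa, prob_cav, p0=(26.0, 1.0))
print(f"estimated intrinsic threshold: {p_threshold:.1f} MPa")
```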
Miller, Julie M; Dewey, Marc; Vavere, Andrea L; Rochitte, Carlos E; Niinuma, Hiroyuki; Arbab-Zadeh, Armin; Paul, Narinder; Hoe, John; de Roos, Albert; Yoshioka, Kunihiro; Lemos, Pedro A; Bush, David E; Lardo, Albert C; Texter, John; Brinker, Jeffery; Cox, Christopher; Clouse, Melvin E; Lima, João A C
2009-04-01
Multislice computed tomography (MSCT) for the noninvasive detection of coronary artery stenoses is a promising candidate for widespread clinical application because of its non-invasive nature and high sensitivity and negative predictive value as found in several previous studies using 16 to 64 simultaneous detector rows. A multi-centre study of CT coronary angiography using 16 simultaneous detector rows has shown that 16-slice CT is limited by a high number of nondiagnostic cases and a high false-positive rate. A recent meta-analysis indicated a significant interaction between the size of the study sample and the diagnostic odds ratios suggestive of small study bias, highlighting the importance of evaluating MSCT using 64 simultaneous detector rows in a multi-centre approach with a larger sample size. In this manuscript we detail the objectives and methods of the prospective "CORE-64" trial ("Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography using 64 Detectors"). This multi-centre trial was unique in that it assessed the diagnostic performance of 64-slice CT coronary angiography in nine centres worldwide in comparison to conventional coronary angiography. In conclusion, the multi-centre, multi-institutional and multi-continental trial CORE-64 has great potential to ultimately assess the per-patient diagnostic performance of coronary CT angiography using 64 simultaneous detector rows.
NASA Astrophysics Data System (ADS)
Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir
2016-03-01
The U-statistic method is one of the most important structural methods to separate the anomaly from the background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, and tries to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method in three-dimensional (3D) conditions, the U-statistic is applied to the grade of two ideal test examples, by considering the sample Z values (elevation). So far, this is the first time that this method has been applied under 3D conditions. To evaluate the performance of the 3D U-statistic method and to compare the U-statistic with one non-structural method, the method of threshold assessment based on the median and standard deviation (MSD method) is applied to the two test examples. Results show that the samples indicated by the U-statistic method as anomalous are more regular and involve less dispersion than those indicated by the MSD method, so that, according to the locations of the anomalous samples, the denser areas among them can be identified as promising zones. Moreover, results show that at a threshold of U = 0, the total error of misclassification for the U-statistic method is much smaller than the total error of the x̄ + n·s criterion. Finally, a 3D model of the two test examples for separating anomaly from background using the 3D U-statistic method is provided. The source code for a software program, which was developed in the MATLAB programming language in order to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all the geochemical varieties and can be used in similar exploration projects.
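The non-structural MSD criterion used here as the comparison baseline reduces to flagging samples whose grade exceeds the mean plus n standard deviations. A minimal sketch, with simulated grades and an arbitrary choice of n, is shown below; the spatial U-statistic itself is not reproduced.

```python
import numpy as np

# Hypothetical grades of samples; the MSD rule flags anomalies above mean + n*std.
rng = np.random.default_rng(7)
grades = np.concatenate([rng.normal(1.0, 0.3, 950),    # background population
                         rng.normal(3.5, 0.5, 50)])    # anomalous sub-population

n = 2.0
threshold = grades.mean() + n * grades.std()
anomalous = grades > threshold

print(f"MSD threshold (x̄ + {n}·s): {threshold:.2f}")
print(f"samples flagged as anomalous: {int(anomalous.sum())}")
```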
NASA Astrophysics Data System (ADS)
Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng
2014-07-01
This paper proposes a novel method of multi-beam laser heterodyne measurement of Young's modulus. Based on the Doppler effect and heterodyne technology, the method loads the length-variation information onto the frequency difference of the multi-beam laser heterodyne signal through frequency modulation by an oscillating mirror; after demodulation of the multi-beam laser heterodyne signal, it simultaneously obtains many values of the length variation caused by the change in applied mass. By processing these values with a weighted average, the length variation is obtained accurately, and the Young's modulus of the sample is then calculated. The method is used to simulate the measurement of the Young's modulus of a wire under different masses in MATLAB; the obtained result shows that the relative measurement error of this method is only 0.3%.
Sahmetlioglu, Ertugrul; Yilmaz, Erkan; Aktas, Ece; Soylak, Mustafa
2014-02-01
A multi-walled carbon nanotubes-polypyrrole conducting polymer nanocomposite has been synthesized, characterized and used for the separation and preconcentration of lead at trace levels in water samples prior to its flame atomic absorption spectrometric detection. The analytical parameters that affect the retention of lead(II) on the new nanocomposite, such as pH, sample volume, eluent and sample flow rate, were optimized. Matrix effects were also investigated. The limit of detection and the preconcentration factor were 1.1 µg L(-1) and 200, respectively. The adsorption capacity of the nanocomposite was 25.0 mg lead(II) per gram of composite. The validation of the method was checked by using the SPS-WW2 Waste water Level 2 certified reference material. The method was applied to the determination of lead in water samples with satisfactory results. © 2013 Elsevier B.V. All rights reserved.
Jaki, Thomas; Allacher, Peter; Horling, Frank
2016-09-05
Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually, a multi-tiered approach is employed for testing patient samples for ADA activity: an initial rapid screen for positive samples, which are subsequently confirmed in a separate assay. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that the confirmation method matters most for the overall performance of the multi-stage process, and that a t-test performs best when differences are moderate to large. Moreover, we find that when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia
2016-11-01
The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the differing nature of microorganisms and clinical specimens. The NA yield of different extraction methods has mostly been studied using spiked samples. However, information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNAPure Compact method. According to cycle threshold values for different pathogens, MagNAPure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNAseP, the MagNAPure method may not be appropriate for NA extraction from blood. We believe that the versatility, reduced risk of cross-contamination and reduced hands-on time of the MagNAPure compensate for its high cost. Copyright © 2016 Elsevier Inc. All rights reserved.
Multi-site field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 (PM10 2.5) in ambient air. The field studies involved the use of both time-integrated filter-based and direct continuous methods. Despite operationa...
ALLTEM Multi-Axis Electromagnetic Induction System Demonstration and Validation
2012-08-01
In this project, one prototype magnetic system, the Tensor Magnetic Gradiometer System (TMGS), and two prototype electromagnetic induction (EMI) instruments, the Very Early Time-domain ElectroMagnetic (VETEM) system and the High Frequency Sounder, were evaluated.
From picture to porosity of river bed material using Structure-from-Motion with Multi-View-Stereo
NASA Astrophysics Data System (ADS)
Seitz, Lydia; Haas, Christian; Noack, Markus; Wieprecht, Silke
2018-04-01
Common methods for in-situ determination of porosity of river bed material are time- and effort-consuming. Although mathematical predictors can be used for estimation, they do not adequately represent porosities. The objective of this study was to assess a new approach for the determination of porosity of frozen sediment samples. The method is based on volume determination by applying Structure-from-Motion with Multi View Stereo (SfM-MVS) to estimate a 3D volumetric model based on overlapping imagery. The method was applied on artificial sediment mixtures as well as field samples. In addition, the commonly used water replacement method was applied to determine porosities in comparison with the SfM-MVS method. We examined a range of porosities from 0.16 to 0.46 that are representative of the wide range of porosities found in rivers. SfM-MVS performed well in determining volumes of the sediment samples. A very good correlation (r = 0.998, p < 0.0001) was observed between the SfM-MVS and the water replacement method. Results further show that the water replacement method underestimated total sample volumes. A comparison with several mathematical predictors showed that for non-uniform samples the calculated porosity based on the standard deviation performed better than porosities based on the median grain size. None of the predictors were effective at estimating the porosity of the field samples.
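Once the SfM-MVS model yields the total sample volume, porosity follows from the standard relation porosity = 1 − V_solids/V_total, with V_solids computed from the dry sediment mass and an assumed grain density. The sketch below uses invented mass and volume values and a typical quartz grain density; it is not the authors' workflow.

```python
# Sketch of turning an SfM-MVS sample volume into porosity, assuming
# porosity = 1 - V_solids / V_total and V_solids = dry mass / grain density.
# The numbers and the grain density are illustrative assumptions.
GRAIN_DENSITY = 2.65e3          # kg/m3, typical for quartz-dominated sediment

def porosity(total_volume_m3, dry_mass_kg, grain_density=GRAIN_DENSITY):
    solids_volume = dry_mass_kg / grain_density
    return 1.0 - solids_volume / total_volume_m3

v_sfm = 0.0042                  # m3, frozen-sample volume from the SfM-MVS model (hypothetical)
m_dry = 7.8                     # kg, oven-dried sediment mass (hypothetical)
print(f"porosity: {porosity(v_sfm, m_dry):.2f}")
```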
Automatic multi-label annotation of abdominal CT images using CBIR
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2017-03-01
We present a technique to annotate multiple organs shown in 2-D abdominal/pelvic CT images using CBIR. This annotation task is motivated by our research interests in visual question-answering (VQA). We aim to apply results from this effort in Open-i, a multimodal biomedical search engine developed by the National Library of Medicine (NLM). Understanding visual content of biomedical images is a necessary step for VQA. Though sufficient annotational information about an image may be available in related textual metadata, not all may be useful as descriptive tags, particularly for anatomy on the image. In this paper, we develop and evaluate a multi-label image annotation method using CBIR. We evaluate our method on two 2-D CT image datasets we generated from 3-D volumetric data obtained from a multi-organ segmentation challenge hosted in MICCAI 2015. Shape and spatial layout information is used to encode visual characteristics of the anatomy. We adapt a weighted voting scheme to assign multiple labels to the query image by combining the labels of the images identified as similar by the method. Key parameters that may affect the annotation performance, such as the number of images used in the label voting and the threshold for excluding labels that have low weights, are studied. The method proposes a coarse-to-fine retrieval strategy which integrates the classification with the nearest-neighbor search. Results from our evaluation (using the MICCAI CT image datasets as well as figures from Open-i) are presented.
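The weighted voting step can be sketched as follows: each retrieved image contributes its labels with a weight given by its similarity to the query, and labels whose normalised weight falls below a cut-off are excluded. The retrieved neighbours, labels, similarities and cut-off below are all hypothetical.

```python
from collections import defaultdict

# Toy weighted-vote label assignment from CBIR results: (similarity, labels) pairs for the
# images retrieved as most similar to the query; values are invented for illustration.
retrieved = [
    (0.92, {"liver", "spleen"}),
    (0.88, {"liver", "left kidney"}),
    (0.73, {"liver", "stomach"}),
    (0.41, {"gallbladder"}),
]

votes = defaultdict(float)
for similarity, labels in retrieved:
    for label in labels:
        votes[label] += similarity          # each label accumulates similarity-weighted votes

total = sum(sim for sim, _ in retrieved)
min_weight = 0.3                            # threshold for excluding low-weight labels
annotation = {lab for lab, v in votes.items() if v / total >= min_weight}

print(sorted(annotation))
```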
“Multi-temperature” method for high-pressure sorption measurements on moist shales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasparik, Matus; Ghanizadeh, Amin; Gensterblum, Yves
2013-08-15
A simple and effective experimental approach has been developed and tested to study the temperature dependence of high-pressure methane sorption in moist organic-rich shales. This method, denoted as "multi-temperature" (short "multi-T") method, enables measuring multiple isotherms at varying temperatures in a single run. The measurement of individual sorption isotherms at different temperatures takes place in a closed system ensuring that the moisture content remains constant. The multi-T method was successfully tested for methane sorption on an organic-rich shale sample. Excess sorption isotherms for methane were measured at pressures of up to 25 MPa and at temperatures of 318.1 K, 338.1 K, and 348.1 K on dry and moisture-equilibrated samples. The measured isotherms were parameterized with a 3-parameter Langmuir-based excess sorption function, from which thermodynamic sorption parameters (enthalpy and entropy of adsorption) were obtained. Using these, we show that by taking explicitly into account water vapor as molecular species in the gas phase with temperature-dependent water vapor pressure during the experiment, more meaningful results are obtained with respect to thermodynamical considerations. The proposed method can be applied to any adsorbent system (coals, shales, industrial adsorbents) and any supercritical gas (e.g., CH4, CO2) and is particularly suitable for sorption measurements using the manometric (volumetric) method.
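One common 3-parameter Langmuir-based excess sorption form is m_excess(P) = m_L · P/(P + P_L) · (1 − ρ_gas(P)/ρ_ads); whether this is exactly the function used by the authors is an assumption here. The sketch below fits that form to an invented isotherm, with pressures, gas densities and excess sorption values chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed 3-parameter Langmuir-based excess sorption function:
#   m_excess(P) = m_L * P / (P + P_L) * (1 - rho_gas(P) / rho_ads)
# with Langmuir capacity m_L, Langmuir pressure P_L and sorbed-phase density rho_ads.
def excess_sorption(P, m_L, P_L, rho_ads, rho_gas):
    return m_L * P / (P + P_L) * (1.0 - rho_gas / rho_ads)

# Hypothetical single isotherm: pressures (MPa), methane gas densities (kg/m3)
# and measured excess sorption (mmol/g).
P = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0])
rho_gas = np.array([13.0, 34.0, 72.0, 110.0, 146.0, 178.0])
m_exc = np.array([0.055, 0.095, 0.115, 0.112, 0.103, 0.092])

popt, _ = curve_fit(lambda P_, mL, PL, ra: excess_sorption(P_, mL, PL, ra, rho_gas),
                    P, m_exc, p0=(0.2, 5.0, 400.0), maxfev=10000)
print("m_L, P_L, rho_ads =", popt)
```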
Rejection Thresholds in Solid Chocolate-Flavored Compound Coating
Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.
2012-01-01
Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds in solid chocolate-flavored compound coating for bitterness. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers versus melters) on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate (SOA), a bitter GRAS additive. Paired preference tests (blank vs. spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between two self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than the milk chocolate preferring group (p = 0.01). Conversely, eating style did not affect group rejection thresholds (p = 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (p = 0.36). Present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. PMID:22924788
Ly, Sovann; Arashiro, Takeshi; Ieng, Vanra; Tsuyuoka, Reiko; Parry, Amy; Horwood, Paul; Heng, Seng; Hamid, Sarah; Vandemaele, Katelijn; Chin, Savuth; Sar, Borann; Arima, Yuzo
2017-01-01
To establish seasonal and alert thresholds and transmission intensity categories for influenza to provide timely triggers for preventive measures or upscaling control measures in Cambodia. Using Cambodia's influenza-like illness (ILI) and laboratory-confirmed influenza surveillance data from 2009 to 2015, three parameters were assessed to monitor influenza activity: the proportion of ILI patients among all outpatients, proportion of ILI samples positive for influenza and the product of the two. With these parameters, four threshold levels (seasonal, moderate, high and alert) were established and transmission intensity was categorized based on a World Health Organization alignment method. Parameters were compared against their respective thresholds. Distinct seasonality was observed using the two parameters that incorporated laboratory data. Thresholds established using the composite parameter, combining syndromic and laboratory data, had the least number of false alarms in declaring season onset and were most useful in monitoring intensity. Unlike in temperate regions, the syndromic parameter was less useful in monitoring influenza activity or for setting thresholds. Influenza thresholds based on appropriate parameters have the potential to provide timely triggers for public health measures in a tropical country where monitoring and assessing influenza activity has been challenging. Based on these findings, the Ministry of Health plans to raise general awareness regarding influenza among the medical community and the general public. Our findings have important implications for countries in the tropics/subtropics and in resource-limited settings, and categorized transmission intensity can be used to assess severity of potential pandemic influenza as well as seasonal influenza.
Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo
2018-05-03
Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if the images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1 norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single-image reconstruction with the joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under a graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
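The ingredient that couples the contrasts in such a joint reconstruction is the ℓ2,1 norm, whose proximal operator is row-wise (group) soft-thresholding of the stacked transform coefficients. A minimal sketch of that operator is given below; the array shapes and regularisation weight are illustrative, and the full alternating-direction solver is not shown.

```python
import numpy as np

# Proximal operator of lam * ||W||_{2,1}: each row (the coefficients of one spatial
# location across all contrasts) is shrunk jointly toward zero.
def prox_l21(W, lam):
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

coeffs = np.random.default_rng(0).normal(size=(1000, 3))   # hypothetical: 3 contrasts
shrunk = prox_l21(coeffs, lam=1.0)
print("rows set to zero jointly:", int((np.linalg.norm(shrunk, axis=1) == 0).sum()))
```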
Santos, Frédéric; Guyomarc'h, Pierre; Bruzek, Jaroslav
2014-12-01
The accuracy of identification tools in forensic anthropology relies primarily upon the variation inherent in the data upon which they are built. Sex determination methods based on craniometrics are widely used and known to be specific to several factors (e.g. sample distribution, population, age, secular trends, measurement technique, etc.). The goal of this study is to discuss the potential variations linked to the statistical treatment of the data. Traditional craniometrics of four samples extracted from documented osteological collections (from Portugal, France, the U.S.A., and Thailand) were used to test three different classification methods: linear discriminant analysis (LDA), logistic regression (LR), and support vector machines (SVM). The Portuguese sample was set as a training model on which the other samples were applied in order to assess the validity and reliability of the different models. The tests were performed using different parameters: some included the selection of the best predictors; some included a strict decision threshold (sex assessed only if the related posterior probability was high, introducing the notion of an indeterminate result); and some used an unbalanced sex ratio. Results indicated that LR tends to perform slightly better than the other techniques and offers a better selection of predictors. Also, the use of a decision threshold (i.e. p>0.95) is essential to ensure an acceptable reliability of sex determination methods based on craniometrics. Although the Portuguese, French, and American samples share a similar sexual dimorphism, application of Western models to the Thai sample (which displayed a lower degree of dimorphism) was unsuccessful. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
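A strict decision threshold of this kind is straightforward to apply on top of a probabilistic classifier: a sex estimate is issued only when the posterior probability exceeds 0.95, and the case is otherwise reported as indeterminate. The sketch below uses simulated measurements rather than the documented collections.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated craniometric measurements and labels; 0 = female, 1 = male (hypothetical data).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.2, 1.0, (200, 5))])
y_train = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test = rng.normal(0.6, 1.0, (50, 5))
posterior = model.predict_proba(X_test)            # columns: P(female), P(male)

threshold = 0.95                                   # strict decision threshold from the abstract
calls = np.where(posterior.max(axis=1) >= threshold,
                 np.where(posterior[:, 1] >= posterior[:, 0], "male", "female"),
                 "indeterminate")
print(dict(zip(*np.unique(calls, return_counts=True))))
```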
Qu, Lei; Chen, Jian-Bo; Zhang, Gui-Jun; Sun, Su-Qin; Zheng, Jing
2017-03-05
As a kind of expensive perfume and valuable herb, Aquilariae Lignum Resinatum (ALR) is often adulterated for economic motivations. In this research, Fourier transform infrared (FT-IR) spectroscopy is employed to establish a simple and quick method for the adulteration screening of ALR. First, the principal chemical constituents of ALR are characterized by FT-IR spectroscopy at room temperature and two-dimensional correlation infrared (2D-IR) spectroscopy with thermal perturbation. Besides the common cellulose and lignin compounds, a certain amount of resin is the characteristic constituent of ALR. Synchronous and asynchronous 2D-IR spectra indicate that the resin (an unstable secondary metabolite) is more sensitive than cellulose and lignin (stable structural constituents) to the thermal perturbation. Using a certified ALR sample as the reference, the infrared spectral correlation threshold is determined by 30 authentic samples and 6 adulterated samples. The spectral correlation coefficient of an authentic ALR sample to the standard reference should be not less than 0.9886 (p=0.01). Three commercial adulterated ALR samples are identified by the correlation threshold. Further interpretation of the infrared spectra of the adulterated samples indicates the common adulterating methods - counterfeiting with other kinds of wood, adding ingredients such as sand to increase the weight, and adding cheap resins such as rosin to increase the content of resin compounds. Results of this research prove that FT-IR spectroscopy can be used as a simple and accurate quality control method for ALR. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Qu, Lei; Chen, Jian-bo; Zhang, Gui-Jun; Sun, Su-qin; Zheng, Jing
2017-03-01
As a kind of expensive perfume and valuable herb, Aquilariae Lignum Resinatum (ALR) is often adulterated for economic motivations. In this research, Fourier transform infrared (FT-IR) spectroscopy is employed to establish a simple and quick method for the adulteration screening of ALR. First, the principal chemical constituents of ALR are characterized by FT-IR spectroscopy at room temperature and two-dimensional correlation infrared (2D-IR) spectroscopy with thermal perturbation. Besides the common cellulose and lignin compounds, a certain amount of resin is the characteristic constituent of ALR. Synchronous and asynchronous 2D-IR spectra indicate that the resin (an unstable secondary metabolite) is more sensitive than cellulose and lignin (stable structural constituents) to the thermal perturbation. Using a certified ALR sample as the reference, the infrared spectral correlation threshold is determined by 30 authentic samples and 6 adulterated samples. The spectral correlation coefficient of an authentic ALR sample to the standard reference should be not less than 0.9886 (p = 0.01). Three commercial adulterated ALR samples are identified by the correlation threshold. Further interpretation of the infrared spectra of the adulterated samples indicates the common adulterating methods - counterfeiting with other kinds of wood, adding ingredients such as sand to increase the weight, and adding cheap resins such as rosin to increase the content of resin compounds. Results of this research prove that FT-IR spectroscopy can be used as a simple and accurate quality control method for ALR.
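The screening rule itself is simple to apply once a reference spectrum is available: compute the correlation coefficient between a test spectrum and the certified reference and flag samples falling below the 0.9886 threshold. The sketch below uses synthetic absorbance vectors, not real FT-IR spectra.

```python
import numpy as np

# Pearson correlation between a test spectrum and the certified reference spectrum,
# evaluated on a shared wavenumber grid; spectra here are synthetic stand-ins.
def spectral_correlation(sample, reference):
    return np.corrcoef(sample, reference)[0, 1]

rng = np.random.default_rng(3)
reference = np.abs(rng.normal(size=900))                   # stand-in for the certified ALR spectrum
authentic = reference + rng.normal(scale=0.05, size=900)   # close to the reference
adulterated = np.abs(rng.normal(size=900))                 # unrelated spectrum

threshold = 0.9886                                         # correlation threshold from the abstract
for name, spec in [("authentic-like", authentic), ("adulterated-like", adulterated)]:
    r = spectral_correlation(spec, reference)
    verdict = "pass" if r >= threshold else "flag for adulteration"
    print(f"{name}: r = {r:.4f} -> {verdict}")
```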
NASA Astrophysics Data System (ADS)
Meng, Rui; Cheong, Kang Hao; Bao, Wei; Wong, Kelvin Kian Loong; Wang, Lu; Xie, Neng-gang
2018-06-01
This article attempts to evaluate the safety and economic performance of an arch dam under the action of static loads. The geometric description of a crown cantilever section and the horizontal arch ring is presented. A three-objective optimization model of arch dam shape is established based on the arch dam volume, maximum principal tensile stress and total strain energy. The evolutionary game method is then applied to obtain the optimal solution. In the evolutionary game technique, a novel and more efficient exploration method of the game players' strategy space, named the 'sorting partition method under the threshold limit', is presented, with the game profit functions constructed according to both competitive and cooperative behaviour. By way of example, three optimization goals have all shown improvements over the initial solutions. In particular, the evolutionary game method has potentially faster convergence. This demonstrates the preliminary proof of principle of the evolutionary game method.
Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.
Rosso, Abbey; Neitlich, Peter; Smith, Robert J
2014-01-01
Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.
Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods
Rosso, Abbey; Neitlich, Peter; Smith, Robert J.
2014-01-01
Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
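The biomass-to-volume relationship is modelled with a zero-intercept (through-origin) regression, whose least-squares slope has a simple closed form. The sketch below illustrates that fit on invented cover-weighted volume and harvested biomass values.

```python
import numpy as np

# Zero-intercept regression of harvested biomass on cover-weighted lichen volume;
# all data values are invented for illustration.
volume = np.array([0.5, 1.1, 1.8, 2.4, 3.0, 4.2])            # cover x height (arbitrary volume units)
biomass = np.array([180., 400., 640., 830., 1060., 1480.])   # harvested biomass, g m-2

slope = (volume @ biomass) / (volume @ volume)   # least-squares slope with the intercept fixed at 0
print(f"slope (biomass per unit volume): {slope:.1f} g m-2")
print("predicted biomass for volume 2.0:", round(slope * 2.0, 1), "g m-2")
```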
Evaluation Of Water Quality At River Bian In Merauke Papua
NASA Astrophysics Data System (ADS)
Djaja, Irba; Purwanto, P.; Sunoko, H. R.
2018-02-01
The River Bian in Merauke Regency is used by the local people of Papua (the Marind) who live along the river to meet their daily needs, such as bathing, washing clothes and dishes, and even defecation and waste disposal, including domestic waste, as well as for ceremonial activities related to the local traditional culture. Changes in land use for other necessities and the domestic activities of the local people have put increasing pressure on the River Bian, thus decreasing the quality of the river. The objectives of this study were to determine and analyze the water quality and the water quality status of the River Bian, and its compliance with water quality standards for its intended uses. Sampling points were determined by purposive sampling, and water samples were taken with a grab method. The analysis of the water quality was performed by standard and pollution index (PI) methods. The study revealed that, for BOD, the water quality of the River Bian at station 3 exceeded the quality threshold. The COD parameter at all stations exceeded the quality threshold for class III. Water quality decreased with increasing PI at stations 1, 2, and 3. In other words, the River Bian is lightly contaminated.
Survey of abdominal obesities in an adult urban population of Kinshasa, Democratic Republic of Congo
Kasiam Lasi On’kin, JB; Longo-Mbenza, B; Okwe, A Nge; Kabangu, N Kangola
2007-01-01
Summary Background The prevalence of overweight/obesity, which is an important cardiovascular risk factor, is rapidly increasing worldwide. Abdominal obesity, a fundamental component of the metabolic syndrome, is not defined by appropriate cutoff points for sub-Saharan Africa. Objective To provide baseline and reference data on the anthropometry/body composition and the prevalence rates of obesity types and levels in the adult urban population of Kinshasa, DRC, Central Africa. Methods During this cross-sectional study carried out within a random sample of adults in Kinshasa town, body mass index, waist circumference and fatty mass were measured using standard methods. Their reference and local thresholds (cut-off points) were compared with those of WHO, NCEP and IDF to define the types and levels of obesity in the population. Results From this sample of 11 511 subjects (5 676 men and 5 835 women), the men presented with similar body mass index and fatty mass values to those of the women, but higher waist measurements. The international thresholds overestimated the prevalence of denutrition, but underestimated that of general and abdominal obesity. The two types of obesity were more prevalent among women than men when using both international and local thresholds. Body mass index was negatively associated with age, but abdominal obesity was more frequent before 20 years of age and between 40 and 60 years old. Local thresholds of body mass index (≥ 23, ≥ 27 and ≥ 30 kg/m2) and waist measurement (≥ 80, ≥ 90 and ≥ 94 cm) defined epidemic rates of overweight/general obesity (52%) and abdominal obesity (40.9%). The threshold of waist circumference ≥ 94 cm (90th percentile) corresponding to the threshold of the body mass index ≥ 30 kg/m2 (90th percentile) was proposed as the specific threshold of definition of the metabolic syndrome, without reference to gender, for the cities of sub-Saharan Africa. Conclusion Further studies are required to define the optimal threshold of waist circumference in rural settings. The present local cut-off points of body mass index and waist circumference could be appropriate for the identification of Africans at risk of obesity-related disorders, and indicate the need to implement interventions to reverse increasing levels of obesity. PMID:17985031
Wearable Lactate Threshold Predicting Device is Valid and Reliable in Runners.
Borges, Nattai R; Driller, Matthew W
2016-08-01
Borges, NR and Driller, MW. Wearable lactate threshold predicting device is valid and reliable in runners. J Strength Cond Res 30(8): 2212-2218, 2016-A commercially available device claiming to be the world's first wearable lactate threshold predicting device (WLT), using near-infrared LED technology, has entered the market. The aim of this study was to determine the levels of agreement between the WLT-derived lactate threshold workload and traditional methods of lactate threshold (LT) calculation and the interdevice and intradevice reliability of the WLT. Fourteen (7 male, 7 female; mean ± SD; age: 18-45 years, height: 169 ± 9 cm, mass: 67 ± 13 kg, V̇O2max: 53 ± 9 ml·kg⁻¹·min⁻¹) subjects ranging from recreationally active to highly trained athletes completed an incremental exercise test to exhaustion on a treadmill. Blood lactate samples were taken at the end of each 3-minute stage during the test to determine lactate threshold using 5 traditional methods from blood lactate analysis which were then compared against the WLT predicted value. In a subset of the population (n = 12), repeat trials were performed to determine both inter-reliability and intrareliability of the WLT device. Intraclass correlation coefficient (ICC) found high to very high agreement between the WLT and traditional methods (ICC > 0.80), with TEMs and mean differences ranging between 3.9-10.2% and 1.3-9.4%. Both interdevice and intradevice reliability resulted in highly reproducible and comparable results (CV < 1.2%, TEM < 0.2 km·h⁻¹, ICC > 0.97). This study suggests that the WLT is a practical, reliable, and noninvasive tool for use in predicting LT in runners.
Chen, Jennifer C; Cooper, Richelle J; Lopez-O'Sullivan, Ana; Schriger, David L
2014-08-01
We assess emergency department (ED) patients' risk thresholds for preferring admission versus discharge when presenting with chest pain and determine how the method of information presentation affects patients' choices. In this cross-sectional survey, we enrolled a convenience sample of lower-risk acute chest pain patients from an urban ED. We presented patients with a hypothetical value for the risk of adverse outcome that could be decreased by hospitalization and asked them to identify the risk threshold at which they preferred admission versus discharge. We randomized patients to a method of numeric presentation (natural frequency or percentage) and the initial risk presented (low or high) and followed each numeric assessment with an assessment based on visually depicted risks. We enrolled 246 patients and analyzed data on 234 with complete information. The geometric mean risk threshold with numeric presentation was 1 in 736 (1 in 233 with a percentage presentation; 1 in 2,425 with a natural frequency presentation) and 1 in 490 with a visual presentation. Fifty-nine percent of patients (137/234) chose the lowest or highest risk values offered. One hundred fourteen patients chose different thresholds for numeric and visual risk presentations. We observed strong anchoring effects; patients starting with the lowest risk chose a lower threshold than those starting with the highest risk possible and vice versa. Using an expected utility model to measure patients' risk thresholds does not seem to work, either to find a stable risk preference within individuals or in groups. Further work in measurement of patients' risk tolerance or methods of shared decisionmaking not dependent on assessment of risk tolerance is needed. Copyright © 2014 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuang, Zheng; Lyon, Elliott; Cheng, Hua; Page, Vincent; Shenton, Tom; Dearden, Geoff
2017-03-01
We report on a study into multi-location laser ignition (LI) with a Spatial Light Modulator (SLM), to improve the performance of a single cylinder automotive gasoline engine. Three questions are addressed: i/ How to deliver a multi-beam diffracted pattern into an engine cylinder, through a small opening, while avoiding clipping? ii/ How much incident energy can an SLM handle (optical damage threshold) and how many simultaneous beam foci could thus be created? iii/ Would the multi-location sparks created be sufficiently intense and stable to ignite an engine and, if so, what would be their effect on engine performance compared to single-location LI? Answers to these questions were determined as follows. Multi-beam diffracted patterns were created by applying computer generated holograms (CGHs) to the SLM. An optical system for the SLM was developed via modelling in ZEMAX, to cleanly deliver the multi-beam patterns into the combustion chamber without clipping. Optical damage experiments were carried out on Liquid Crystal on Silicon (LCoS) samples provided by the SLM manufacturer and the maximum safe pulse energy to avoid SLM damage was found to be 60 mJ. Working within this limit, analysis of the multi-location laser induced sparks showed that diffracting into three identical beams gave slightly insufficient energy to guarantee 100% sparking, so subsequent engine experiments used 2 equal energy beams laterally spaced by 4 mm. The results showed that dual-location LI gave more stable combustion and higher engine power output than single-location LI, for increasingly lean air-fuel mixtures. The paper concludes with a discussion of how these results may be exploited.
Schmitt, Stephen J.; Milby Dawson, Barbara J.; Belitz, Kenneth
2009-01-01
Groundwater quality in the approximately 1,600 square-mile Antelope Valley study unit (ANT) was investigated from January to April 2008 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within ANT, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 57 wells in Kern, Los Angeles, and San Bernardino Counties. Fifty-six of the wells were selected using a spatially distributed, randomized, grid-based method to provide statistical representation of the study area (grid wells), and one additional well was selected to aid in evaluation of specific water-quality issues (understanding well). The groundwater samples were analyzed for a large number of organic constituents (volatile organic compounds [VOCs], gasoline additives and degradates, pesticides and pesticide degradates, fumigants, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), and radioactive constituents (gross alpha and gross beta radioactivity, radium isotopes, and radon-222). Naturally occurring isotopes (strontium, tritium, and carbon-14, and stable isotopes of hydrogen and oxygen in water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 239 constituents and water-quality indicators (field parameters) were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at 12 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples generally were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most compounds. This study did not evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only, and are not indicative of compliance or non-compliance with drinking water standards.
Most constituents that were detected in groundwater samples were found at concentrations below drinking-water thresholds. Volatile organic compounds (VOCs) were detected in about one-half of the samples and pesticides detected in about one-third of the samples; all detections of these constituents were below health-based thresholds. Most detections of trace elements and nutrients in samples from ANT wells were below health-based thresholds. Exceptions include: one detection of nitrite plus nitr
Gas Composition Sensing Using Carbon Nanotube Arrays
NASA Technical Reports Server (NTRS)
Li, Jing; Meyyappan, Meyya
2012-01-01
This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.
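Conceptually, identification reduces to matching each estimated pulse-discharge breakdown voltage against a lookup table of known per-gas thresholds within some tolerance. The sketch below shows that matching step only; the voltage values are placeholders, not calibrated data for any real gas.

```python
# Match an estimated pulse-discharge breakdown voltage against known per-gas thresholds.
# The table entries and the tolerance are hypothetical placeholders.
KNOWN_THRESHOLDS_V = {"helium": 156.0, "argon": 137.0, "nitrogen": 251.0}

def identify_gas(measured_threshold_v, tolerance_v=10.0):
    """Return the candidate gases whose known breakdown threshold lies within the tolerance."""
    return [gas for gas, v in KNOWN_THRESHOLDS_V.items()
            if abs(v - measured_threshold_v) <= tolerance_v]

print(identify_gas(140.0))   # -> ['argon']
print(identify_gas(300.0))   # -> [] (no candidate within tolerance)
```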
NASA Astrophysics Data System (ADS)
Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław
2017-09-01
Methods for multi-parameter data visualization that transform a multidimensional space into a two-dimensional one allow multi-parameter data to be shown on a computer screen. Thanks to that, it is possible to conduct a qualitative analysis of these data in the most natural way for a human being, i.e. by the sense of sight. One example of such a multi-parameter visualization method is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from the Janina Mining Plant and the Wieczorek Coal Mine. We examined whether this method of multi-parameter data visualization allows the sample space to be divided into areas of differing applicability to the fluidal gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], in which the key, important and additional parameters affecting the gasification process are described.
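A minimal sketch of the projection step, assuming a standardised data matrix and metric multidimensional scaling from scikit-learn, is shown below; the data are random stand-ins for the seven coal parameters.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

# Project seven-parameter samples to two dimensions with multidimensional scaling;
# the data matrix here is a random stand-in, not the mine data.
X = np.random.default_rng(0).normal(size=(120, 7))

X_std = StandardScaler().fit_transform(X)                    # put the seven parameters on a common scale
embedding = MDS(n_components=2, random_state=0).fit_transform(X_std)

print(embedding.shape)   # (120, 2) coordinates that can be plotted and inspected visually
```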
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Brendon A.; Pinkerton, David K.; Wright, Bob W.
The illicit chemical alteration of petroleum fuels is of scientific interest, particularly to regulatory agencies which set fuel specifications, or excises based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC–TOFMS) is ideally suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprised of many samples. The tile-based Fisher-ratio (F-ratio) method reduces the abundance of data in a GC × GC–TOFMS experiment to only the peaks which significantly distinguish the unaltered and acid altered sample classes. Three samples of diesel fuel from different filling stations were each altered to discover chemical features, i.e., analyte peaks, which were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features which are likely to be robust to the variation present between fuel samples and which will consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features which are sufficiently class-distinguishing. When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3.
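The Fisher ratio underlying the method is the between-class variance divided by the within-class variance, computed feature by feature and compared against a threshold. The sketch below is a generic per-feature version on simulated peak tables, not the tile-based GC × GC implementation; 12.4 is simply the per-hit threshold quoted above.

```python
import numpy as np

# Feature-wise Fisher ratio for a two-class comparison (between-class variance over
# pooled within-class variance), with features kept only above a chosen threshold.
def fisher_ratio(a, b):
    grand = np.vstack([a, b]).mean(axis=0)
    n_a, n_b = len(a), len(b)
    between = n_a * (a.mean(axis=0) - grand) ** 2 + n_b * (b.mean(axis=0) - grand) ** 2
    within = ((n_a - 1) * a.var(axis=0, ddof=1) +
              (n_b - 1) * b.var(axis=0, ddof=1)) / (n_a + n_b - 2)
    return between / within          # between-class degrees of freedom = 1 for two classes

rng = np.random.default_rng(2)
unaltered = rng.normal(0.0, 1.0, size=(6, 500))   # 6 runs x 500 simulated peak-region features
altered = rng.normal(0.0, 1.0, size=(6, 500))
altered[:, :25] -= 4.0                            # pretend 25 features drop after acid treatment

f = fisher_ratio(unaltered, altered)
print("features above the 12.4 threshold:", int((f > 12.4).sum()))
```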
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first order accurate inner iteration and a second-order correction performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removement. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Roalf, David R.; Quarmley, Megan; Elliott, Mark A.; Satterthwaite, Theodore D.; Vandekar, Simon N.; Ruparel, Kosha; Gennatas, Efstathios D.; Calkins, Monica E.; Moore, Tyler M.; Hopson, Ryan; Prabhakaran, Karthik; Jackson, Chad T.; Verma, Ragini; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.
2015-01-01
Background Diffusion tensor imaging (DTI) is applied in investigation of brain biomarkers for neurodevelopmental and neurodegenerative disorders. However, the quality of DTI measurements, like other neuroimaging techniques, is susceptible to several confounding factors (e.g. motion, eddy currents), which have only recently come under scrutiny. These confounds are especially relevant in adolescent samples where data quality may be compromised in ways that confound interpretation of maturation parameters. The current study aims to leverage DTI data from the Philadelphia Neurodevelopmental Cohort (PNC), a sample of 1,601 youths ages of 8–21 who underwent neuroimaging, to: 1) establish quality assurance (QA) metrics for the automatic identification of poor DTI image quality; 2) examine the performance of these QA measures in an external validation sample; 3) document the influence of data quality on developmental patterns of typical DTI metrics. Methods All diffusion-weighted images were acquired on the same scanner. Visual QA was performed on all subjects completing DTI; images were manually categorized as Poor, Good, or Excellent. Four image quality metrics were automatically computed and used to predict manual QA status: Mean voxel intensity outlier count (MEANVOX), Maximum voxel intensity outlier count (MAXVOX), mean relative motion (MOTION) and temporal signal-to-noise ratio (TSNR). Classification accuracy for each metric was calculated as the area under the receiver-operating characteristic curve (AUC). A threshold was generated for each measure that best differentiated visual QA status and applied in a validation sample. The effects of data quality on sensitivity to expected age effects in this developmental sample were then investigated using the traditional MRI diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD). Finally, our method of QA is compared to DTIPrep. Results TSNR (AUC=0.94) best differentiated Poor data from Good and Excellent data. MAXVOX (AUC=0.88) best differentiated Good from Excellent DTI data. At the optimal threshold, 88% of Poor data and 91% Good/Excellent data were correctly identified. Use of these thresholds on a validation dataset (n=374) indicated high accuracy. In the validation sample 83% of Poor data and 94% of Excellent data was identified using thresholds derived from the training sample. Both FA and MD were affected by the inclusion of poor data in an analysis of age, sex and race in a matched comparison sample. In addition, we show that the inclusion of poor data results in significant attenuation of the correlation between diffusion metrics (FA and MD) and age during a critical neurodevelopmental period. We find higher correspondence between our QA method and DTIPrep for Poor data, but we find our method to be more robust for apparently high-quality images. Conclusion Automated QA of DTI can facilitate large-scale, high-throughput quality assurance by reliably identifying both scanner and subject induced imaging artifacts. The results present a practical example of the confounding effects of artifacts on DTI analysis in a large population-based sample, and suggest that estimates of data quality should not only be reported but also accounted for in data analysis, especially in studies of development. PMID:26520775
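Scoring a QA metric against manual labels and choosing a cut-off can be sketched with a standard ROC analysis; the threshold below is picked by Youden's J, which may differ from the criterion actually used in the study, and all values are simulated.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated temporal SNR values for Poor versus Good/Excellent scans (hypothetical numbers).
rng = np.random.default_rng(4)
tsnr = np.concatenate([rng.normal(4.0, 1.0, 60),    # Poor scans: low temporal SNR
                       rng.normal(8.0, 1.5, 300)])  # Good/Excellent scans
is_usable = np.array([0] * 60 + [1] * 300)

auc = roc_auc_score(is_usable, tsnr)                # how well the metric separates the classes
fpr, tpr, cutoffs = roc_curve(is_usable, tsnr)
best = cutoffs[np.argmax(tpr - fpr)]                # Youden's J as one way to pick a cut-off

print(f"AUC = {auc:.2f}, threshold = {best:.2f}")
print("scans flagged as Poor:", int((tsnr < best).sum()))
```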
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses, in which outcomes are considered separately and their joint relationship under uncertainty is ignored, lead to incorrect inference regarding preferred strategies. Objective The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods A multiple-outcome CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing the probability of strategies minimising ENL. Results Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home (λ1) exceeds $1,068, or is dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where cost and outcomes are jointly considered and the optimal strategy depends on threshold values. For example, PEACH minimises ENL when λ1=$2,000 and λ2=$2,000 (threshold value for dying at home), with a 51.6% chance of PEACH being cost-effective. Conclusion Comparison in CDU space and the associated summary measures have distinct advantages over multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
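As a rough illustration of the summary measures named above, the sketch below computes expected net loss (ENL) for two strategies over a grid of threshold values from simulated draws of costs and effects. All numbers are hypothetical placeholders, not the PEACH trial data, and reading ENL as the mean shortfall of a strategy's net benefit relative to the per-draw optimum is an assumption consistent with the usual definition.

```python
import numpy as np

# Hypothetical joint uncertainty: cost and two effects per strategy per draw
rng = np.random.default_rng(1)
n = 5000
cost = {"usual": rng.normal(4000, 800, n), "peach": rng.normal(5500, 900, n)}
days_home = {"usual": rng.normal(10, 3, n), "peach": rng.normal(12, 3, n)}
p_home_death = {"usual": rng.normal(0.20, 0.05, n), "peach": rng.normal(0.18, 0.05, n)}

for lam1 in (500, 1000, 2000):          # threshold value per day at home
    for lam2 in (0, 2000):              # threshold value per home death
        nb = {s: lam1 * days_home[s] + lam2 * p_home_death[s] - cost[s]
              for s in cost}
        best = np.maximum(nb["usual"], nb["peach"])
        enl = {s: np.mean(best - nb[s]) for s in nb}    # expected net loss
        p_ce = np.mean(nb["peach"] > nb["usual"])        # prob. PEACH optimal
        print(lam1, lam2, {s: round(v, 1) for s, v in enl.items()}, round(p_ce, 3))
```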
NASA Astrophysics Data System (ADS)
Hori, Y.; Cheng, V. Y. S.; Gough, W. A.
2017-12-01
A network of winter roads in northern Canada connects a number of remote First Nations communities to all-season roads and rail. The extent of the winter road networks depends on geographic features, socio-economic activities, and the number of remote First Nations communities, and therefore differs among the provinces. The most extensive winter road networks south of the 60th parallel are located in Ontario and Manitoba, serving 32 and 18 communities respectively. In recent years, a warmer climate has resulted in a shorter winter road season and an increase in unreliable road conditions, thus limiting access among remote communities. This study focused on examining future freezing degree-day (FDD) accumulations during the winter road season at selected locations throughout Ontario's Far North and northern Manitoba, using recent climate model projections from multi-model ensembles of General Circulation Models (GCMs) under the Representative Concentration Pathway (RCP) scenarios. First, the non-parametric Mann-Kendall correlation test and the Theil-Sen method were used to identify any statistically significant trends between FDDs and time for the base period (1981-2010). Second, future climate scenarios were developed for the study areas using statistical downscaling methods. This study also examined the lowest threshold of FDDs for winter road construction in a future period. Our previous study established a lowest threshold of 380 FDDs, derived from the relationship between FDDs and the opening dates of the James Bay Winter Road near the Hudson-James Bay coast. This study applied that threshold as a conservative estimate of the minimum FDDs required, in order to examine the effects of climate change on the winter road construction period.
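The FDD accumulation and trend-testing steps lend themselves to a short sketch. The snippet below computes seasonal freezing degree-days from synthetic daily temperatures, applies a Mann-Kendall-type test (via Kendall's tau against time) and the Theil-Sen slope, and counts seasons falling below the 380 FDD threshold mentioned above; the temperature series is an illustrative assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1981, 2011)
fdd = []
for i in range(len(years)):
    daily_t = rng.normal(-12 + 0.05 * i, 8, 150)       # one winter season (deg C)
    fdd.append(np.sum(-daily_t[daily_t < 0.0]))         # degree-days below 0 deg C
fdd = np.array(fdd)

tau, p_value = stats.kendalltau(years, fdd)              # Mann-Kendall-type test
slope, intercept, lo, hi = stats.theilslopes(fdd, years) # Theil-Sen slope estimate
print(f"tau={tau:.2f} (p={p_value:.3f}), Theil-Sen slope={slope:.1f} FDD/yr")
print("seasons below the 380 FDD threshold:", int(np.sum(fdd < 380)))
```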
Aircraft Conflict Analysis and Real-Time Conflict Probing Using Probabilistic Trajectory Modeling
NASA Technical Reports Server (NTRS)
Yang, Lee C.; Kuchar, James K.
2000-01-01
Methods for maintaining separation between aircraft in the current airspace system have been built from a foundation of structured routes and evolved procedures. However, as the airspace becomes more congested and the chance of failures or operational errors becomes more problematic, automated conflict alerting systems have been proposed to help provide decision support and to serve as traffic monitoring aids. The problem of conflict detection and resolution has been tackled in a number of different ways, but in this thesis it is recast as a problem of prediction in the presence of uncertainties. Much of the focus is concentrated on the errors and uncertainties of the working trajectory model used to estimate future aircraft positions. The more accurate the prediction, the more likely an ideal (no false alarms, no missed detections) alerting system can be designed. Additional insights into the problem were brought forth by a review of current operational and developmental approaches found in the literature. An iterative, trial-and-error approach to threshold design was identified. When examined from a probabilistic perspective, the threshold parameters were found to be a surrogate for probabilistic performance measures. To overcome the limitations of the current iterative design method, a new direct approach is presented in which the performance measures are directly computed and used to perform the alerting decisions. The methodology is shown to handle complex encounter situations (3-D, multi-aircraft, multi-intent, with uncertainties) with relative ease. Utilizing a Monte Carlo approach, a method was devised to perform the probabilistic computations in near real time. Not only does this greatly increase the method's potential as an analytical tool, but it also opens up the possibility of use as a real-time conflict alerting probe. A prototype alerting logic was developed and has been utilized in several NASA Ames Research Center experimental studies.
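The Monte Carlo conflict-probability computation described above can be sketched in a few lines: sample perturbed trajectories for two aircraft and count the fraction of runs in which separation falls below a protection threshold. The geometry, speeds and uncertainty growth below are illustrative assumptions, not the thesis's trajectory model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs, horizon, dt = 10000, 300.0, 5.0          # samples, seconds, time step
sep_threshold = 9260.0                           # ~5 NM in metres
times = np.arange(0.0, horizon + dt, dt)

# Nominal positions: aircraft A eastbound, B northbound, crossing near t = 150 s
ax = -230.0 * (150.0 - times); ay = np.zeros_like(times)
bx = np.zeros_like(times);     by = -230.0 * (150.0 - times)

# Position uncertainty (metres, 1 sigma) growing with look-ahead time
err = 15.0 + 0.8 * times
conflicts = 0
for _ in range(n_runs):
    dx = (ax + rng.normal(0, err)) - (bx + rng.normal(0, err))
    dy = (ay + rng.normal(0, err)) - (by + rng.normal(0, err))
    if np.any(np.hypot(dx, dy) < sep_threshold):
        conflicts += 1
print("estimated conflict probability:", conflicts / n_runs)
```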
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D-DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced to real-time behavior for some applications by computing on NVIDIA multi-GPU platforms.
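The per-component denoising step (2D DWT, coefficient thresholding, inverse transform) can be sketched with PyWavelets. The universal soft threshold used below is an assumption for illustration; the paper's own threshold rule and recursion are not reproduced.

```python
import numpy as np
import pywt

def denoise_component(band, wavelet="db4", levels=2):
    # Separable 2D DWT of one EMP component
    coeffs = pywt.wavedec2(band, wavelet, level=levels)
    # Noise estimate from the finest diagonal detail, then a universal threshold
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(band.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    # Inverse 2D DWT reconstructs the denoised component
    return pywt.waverec2(new_coeffs, wavelet)[: band.shape[0], : band.shape[1]]

noisy = np.random.default_rng(4).normal(0, 1, (128, 128)) + 5.0
clean = denoise_component(noisy)
print(noisy.std(), clean.std())
```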
Adaptive segmentation of nuclei in H&E stained tendon microscopy
NASA Astrophysics Data System (ADS)
Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien
2015-12-01
Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H&E-stained tendon microscopy. However, qualitative analysis is too subjective, and thus the results depend heavily on the observers. We develop an automatic segmentation procedure that segments and counts the nuclei in H&E-stained tendon microscopy quickly and precisely. This procedure first determines the complexity of the image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nuclei count of the proposed method is close to the experts' count, and the processing time of the proposed method is much shorter than the experts'.
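A rough sketch of the segmentation-and-counting idea is shown below using a global Otsu threshold and connected-component labelling from scikit-image; the paper's sampling-based and Laplacian-based thresholding schemes are not reproduced, and the input image is a synthetic placeholder.

```python
import numpy as np
from skimage import color, filters, measure, morphology

def count_nuclei(rgb_image, min_area=30):
    gray = color.rgb2gray(rgb_image)
    mask = gray < filters.threshold_otsu(gray)        # nuclei stain darker
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    labels = measure.label(mask)                       # connected components
    return labels.max(), labels

rng = np.random.default_rng(5)
fake_image = rng.uniform(0.6, 1.0, (256, 256, 3))      # placeholder image
n, _ = count_nuclei(fake_image)
print("nuclei counted:", n)
```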
Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R; Del Río-Navarro, Blanca E; Mendoza-Vargas, Alfredo; Sánchez, Filiberto; Ochoa-Leyva, Adrian
2017-01-01
In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. Absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need for a reference sample. However, the background fluorescence represents a challenge for determining absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used this as the basis of a new approach for absolute gene expression analysis, in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. We extracted RNA from leukocyte samples of 16 children (nine males and seven females, ages 6-10 years). An Affymetrix GeneChip Human Gene 1.0 ST Array was run for each sample, and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression level by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing a transcriptome profiling of microarray data without the need for an additional reference experiment. Our approach, based on the establishment of a threshold for absolute gene expression analysis, will allow a new way to analyze thousands of microarrays from public databases. This allows the study of different human diseases without the need for additional samples for relative expression experiments.
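The core thresholding idea, treating the Y-chromosome probe signal in female samples as background and flagging genes above that level as expressed, can be sketched as follows. The values and the mean + 2 SD rule are illustrative assumptions rather than the study's exact computation.

```python
import numpy as np

# Synthetic log2 fluorescence of 3 Y-chromosome probes in 7 female samples
# (no Y chromosome, so the signal is pure background).
rng = np.random.default_rng(6)
y_probe_signal_females = rng.normal(5.2, 0.3, (7, 3))
threshold = y_probe_signal_females.mean() + 2 * y_probe_signal_females.std()

all_gene_signal = rng.normal(6.0, 1.2, 20000)          # whole-array signal
expressed = all_gene_signal > threshold                # absolute expression call
print(f"threshold={threshold:.2f}, expressed genes={int(expressed.sum())}")
```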
Few-cycle pulse laser induced damage threshold determination of ultra-broadband optics.
Kafka, Kyle R P; Talisa, Noah; Tempea, Gabriel; Austin, Drake R; Neacsu, Catalin; Chowdhury, Enam A
2016-12-12
A systematic study of few-cycle pulse laser induced damage threshold (LIDT) determination was performed for commercially-available ultra-broadband optics, (i.e. chirped mirrors, silver mirrors, beamsplitters, etc.) in vacuum and in air, for single and multi-pulse regime (S-on-1). Multi-pulse damage morphology at fluences below the single-pulse LIDT was studied in order to investigate the mechanisms leading to the onset of damage. Stark morphological contrast was observed between multi-pulse damage sites formed in air versus those in vacuum. One effect of vacuum testing compared to air included suppression of laser-induced periodic surface structures (LIPSS) formation, possibly influenced by a reduced presence of damage debris. Another effect of vacuum was occasional lowering of LIDT, which appears to be due to the stress-strain performance of the coating design during laser irradiation and under the external stress of vacuum ambience. A fused silica substrate is also examined, and a non-LIPSS nanostructuring is observed on the surface. Possible mechanisms are discussed.
Shelton, Jennifer L.; Pimentel, Isabel; Fram, Miranda S.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 3,000 square-mile Kern County Subbasin study unit (KERN) was investigated from January to March 2006, as part of the Priority Basin Assessment Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Assessment project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the California State Water Resources Control Board (SWRCB) in collaboration with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory (LLNL). The Kern County Subbasin study was designed to provide a spatially unbiased assessment of raw (untreated) ground-water quality within KERN, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 50 wells within the San Joaquin Valley portion of Kern County. Forty-seven of the wells were selected using a randomized grid-based method to provide a statistical representation of the ground-water resources within the study unit. Three additional wells were sampled to aid in the evaluation of changes in water chemistry along regional ground-water flow paths. The ground-water samples were analyzed for a large number of man-made organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon) and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blanks, replicates, and laboratory matrix spikes) were collected and analyzed at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data from the ground-water samples. Assessment of the quality-control information resulted in censoring of less than 0.4 percent of the data collected for ground-water samples. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw ground water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply, not to the raw ground water, but to treated water that is served to the consumer. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH), as well as with thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs and pesticides each were detected in approximately 60 percent of the grid wells, and detections of all compounds but one were below health-based thresholds. The fumigant 1,2-dibromo-3-chloropropane (DBCP) was detected above the USEPA maximum contaminant level (MCL-US) in one sample. Detections of most inorganic constituents were also below health-based thresholds.
Constituents detected above health-based thresholds include nitrate (MCL-US, 2 samples), arsenic (MCL-US, 2 samples), and vanadium (California notification level, NL-CA, 1 sample). All detections of radioactive constituents were below health-based thresholds, although nine samples had activities of radon-222 above the lower proposed MCL-US. Most of the samples from KERN wells had concentrations of major elements, total dissolved solids, and trace elements below the non-enforceable thresholds set for aesthetic concerns.
Privacy preserving data anonymization of spontaneous ADE reporting system dataset.
Lin, Wen-Yang; Yang, Duen-Chuan; Wang, Jie-Teng
2016-07-18
To facilitate long-term safety surveillance of marketed drugs, many spontaneous reporting systems (SRSs) of ADR events have been established worldwide. Since the data collected by SRSs contain sensitive personal health information that should be protected to prevent the identification of individuals, this raises the issue of privacy-preserving data publishing (PPDP), that is, how to sanitize (anonymize) raw data before publishing. Although much work has been done on PPDP, very few studies have focused on protecting the privacy of SRS data, and none of the existing anonymization methods is well suited to SRS datasets, which contain characteristics such as rare events, multiple individual records, and multi-valued sensitive attributes. We propose a new privacy model called MS(k, θ*)-bounding for protecting published spontaneous ADE reporting data from privacy attacks. Our model has the flexibility of varying privacy thresholds, i.e., θ*, for different sensitive values and takes the characteristics of SRS data into consideration. We also propose an anonymization algorithm for sanitizing the raw data to meet the requirements specified through the proposed model. Our algorithm adopts a greedy clustering strategy to group the records into clusters, conforming to an innovative anonymization metric that aims to minimize the privacy risk as well as maintain the data utility for ADR detection. An empirical study was conducted using the FAERS dataset from 2004Q1 to 2011Q4. We compared our model with four prevailing methods, including k-anonymity, (X, Y)-anonymity, Multi-sensitive l-diversity, and (α, k)-anonymity, evaluated via two measures, Danger Ratio (DR) and Information Loss (IL), and considered three different scenarios of threshold setting for θ*: uniform, level-wise and frequency-based. We also conducted experiments to inspect the impact of anonymized data on the strength of discovered ADR signals. With all three threshold settings for sensitive values, our method can successfully prevent the disclosure of sensitive values (nearly all observed DRs are zero) without sacrificing too much data utility. With a non-uniform threshold setting, level-wise or frequency-based, our MS(k, θ*)-bounding exhibits the best data utility and the least privacy risk among all the models. The experiments conducted on selected ADR signals from MedWatch show that only very small differences in signal strength (PRR or ROR) were observed. The results show that our method can effectively prevent the disclosure of patient-sensitive information without sacrificing data utility for ADR signal detection. We propose a new privacy model for protecting SRS data that possess some characteristics overlooked by contemporary models, and an anonymization algorithm to sanitize SRS data in accordance with the proposed model. Empirical evaluation on the real SRS dataset, i.e., FAERS, shows that our method can effectively solve the privacy problem in SRS data without influencing the ADR signal strength.
Impact on enzyme activity as a new quality index of wastewater.
Balestri, Francesco; Moschini, Roberta; Cappiello, Mario; Del-Corso, Antonella; Mura, Umberto
2013-03-15
The aim of this study was to define a new indicator for the quality of wastewaters that are released into the environment. A quality index is proposed for wastewater samples in terms of the inertness of the samples toward enzyme activity, taking advantage of the sensitivity of enzymes to pollutants that may be present in the waste samples. The effect of wastewater samples on the rate of a number of different enzyme-catalyzed reactions was measured, and the results for all the selected enzymes were analyzed in an integrated fashion (multi-enzymatic sensor). This approach enabled us to define an overall quality index, the "Impact on Enzyme Function" (IEF-index), which is composed of three indicators: i) the Synoptic parameter, related to the average effect of the waste sample on each component of the enzymatic sensor; ii) the Peak parameter, related to the maximum effect observed among all the effects exerted by the sample on the sensor components; and iii) the Interference parameter, related to the number of sensor components that are affected by less than a fixed threshold value. A number of water-based samples, including public potable tap water, fluids from urban sewage systems, and wastewater from the leather, paper and dye industries, were analyzed and the IEF-index was then determined. Although the IEF-index cannot discriminate between different types of wastewater samples, it could be a useful parameter in monitoring the improvement of the quality of a specific sample. However, by analyzing an adequate number of waste samples of the same type, even from different local contexts, the profile of the impact on each component of the multi-enzymatic sensor could be typical of specific types of waste. The IEF-index is proposed as a supplementary qualification score for wastewaters, in addition to the certification of the waste's conformity to legal requirements. Copyright © 2013 Elsevier Ltd. All rights reserved.
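The three indicators can be illustrated with a small sketch that aggregates the per-enzyme effects of a sample. The exact aggregation rules and the 10% interference threshold below are assumptions for illustration only, not the published definitions.

```python
import numpy as np

def ief_indicators(effects_percent, interference_threshold=10.0):
    # effects_percent: per-enzyme effect of the sample (percent change in rate)
    effects = np.abs(np.asarray(effects_percent, dtype=float))
    synoptic = effects.mean()                       # average effect on the sensor
    peak = effects.max()                            # strongest single-enzyme effect
    interference = int(np.sum(effects < interference_threshold))  # weakly affected
    return synoptic, peak, interference

sample_effects = [3.0, 12.5, 48.0, 7.2, 1.1, 22.0]   # hypothetical enzyme panel
print(ief_indicators(sample_effects))
```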
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) is not only a substantial financial burden on the health care system but also an emotional burden for patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI), particularly at the earliest stages, is becoming more and more critical. However, high dimensionality and imbalanced data are two major challenges in the study of computer-aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictors) and the disease status (response). To better capture the complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity at the region-of-interest (ROI) level, which leads to simultaneously selecting a subset of the relevant brain regions and learning a dimensionality transformation. Meanwhile, a multi-kernel over-sampling (MKOS) approach was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class-imbalanced distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary and multi-class) including all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-01-01
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear in series or at multiple powers. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear in series or at multiple powers. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
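The adaptive-filter core behind this approach can be illustrated with a standard affine projection algorithm (APA); the multi-power extension (MAPA) itself is not reproduced here, and the filter length, projection order, step size and test signal below are all illustrative assumptions.

```python
import numpy as np

def apa_filter(x, d, taps=8, order=4, mu=0.5, eps=1e-4):
    # Standard affine projection adaptive filter: project onto the last
    # `order` regressor vectors at each step and update the weight vector w.
    w = np.zeros(taps)
    y = np.zeros_like(d)
    for n in range(taps + order - 1, len(x)):
        A = np.array([x[n - k - taps + 1: n - k + 1][::-1] for k in range(order)])
        dk = d[n - order + 1: n + 1][::-1]
        e = dk - A @ w
        w += mu * A.T @ np.linalg.solve(A @ A.T + eps * np.eye(order), e)
        y[n] = w @ x[n - taps + 1: n + 1][::-1]
    return y, w

rng = np.random.default_rng(7)
x = rng.normal(size=2000)                                   # excitation signal
d = np.convolve(x, [0.6, -0.3, 0.1], mode="same") + 0.01 * rng.normal(size=2000)
y, w = apa_filter(x, d)
print("final weights:", np.round(w, 3))
```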
Liang, Xianrui; Zhao, Cui; Su, Weike
2015-11-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry method integrating multi-constituent determination and fingerprint analysis has been established for the quality assessment and control of Scutellaria indica L. The optimized method is fast and efficient, and allows multi-constituent determination and fingerprint analysis in a single chromatographic run within 11 min. In total, 36 compounds were detected, and 23 of them were unequivocally identified or tentatively assigned. The established fingerprint method was applied to the analysis of ten S. indica samples from different geographic locations. The quality assessment was achieved by using principal component analysis. The proposed method is useful and reliable for the characterization of multiple constituents in a complex chemical system and the overall quality assessment of S. indica. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Du, Jinming; Tang, Lixin
2018-01-01
Understanding voluntary contribution in threshold public goods games has important practical implications. To improve contributions and provision frequency, the free-rider problem and the assurance problem must be solved. Insurance could play a significant, but largely unrecognized, role in facilitating contributions to the provision of public goods by providing compensation against losses. In this paper, we study how an insurance compensation mechanism affects individuals' decision-making in risky environments. We propose a multi-level threshold public goods game model in which two kinds of public goods games (local and global) are considered. In particular, the global public goods game involves a threshold, which is related to the safety of all the players. We theoretically probe the evolution of contributions at the different levels and of free-riders, and focus on the influence of insurance on the global contribution. We explore two scenarios: one in which only global contributors can buy insurance and one in which all players can. It is found that with greater insurance compensation, especially under high collective risk, players are more likely to contribute globally when only global contributors are insured. On the other hand, the global contribution can be promoted if a premium discount is given to global contributors when everyone buys insurance.
Todorov, Todor I.; Wolf, Ruth E.; Adams, Monique
2014-01-01
Typically, 27 major, minor, and trace elements are determined in natural waters, acid mine drainage, extraction fluids, and leachates of geological and environmental samples by inductively coupled plasma-optical emission spectrometry (ICP-OES). At the discretion of the analyst, additional elements may be determined after suitable method modifications and performance data are established. Samples are preserved in 1–2 percent nitric acid (HNO3) at sample collection or as soon as possible after collection. The aqueous samples are aspirated into the ICP-OES discharge, where the elemental emission signals are measured simultaneously for 27 elements. Calibration is performed with a series of matrix-matched, multi-element solution standards.
Evaluation of the damage threshold of optical thin films using an amplified spontaneous emission source
NASA Astrophysics Data System (ADS)
Zhou, Qiong; Sun, Mingying; Zhang, Zhixiang; Yao, Yudong; Peng, Yujie; Liu, Dean; Zhu, Jianqiang
2014-10-01
An accurate evaluation method with amplified spontaneous emission (ASE) as the irradiation source has been developed for testing thin-film damage thresholds. The partial coherence of the ASE source results in a very smooth beam profile in the near field and a uniform intensity distribution of the focal spot in the far field. The ASE is generated by an Nd:glass rod amplifier in the SG-II high-power laser facility, with a pulse duration of 9 ns and a spectral width (FWHM) of 1 nm. The damage threshold of a TiO2 high-reflection film is 14.4 J/cm2 using ASE as the irradiation source, about twice the 7.4 J/cm2 measured with a laser source of the same pulse duration and central wavelength. The damage area induced by ASE is small, with small-scale desquamation and a few pits, corresponding to the defect distribution of the samples. Large-area desquamation is observed in the area damaged by the laser, mainly because of the non-uniformity of the laser light. The ASE damage threshold leads to more accurate evaluations of the samples' damage probability by reducing the influence of hot spots in the irradiation beam. Furthermore, the ASE source has great potential for detecting the defect distribution of optical elements.
Consensus for second-order multi-agent systems with position sampled data
NASA Astrophysics Data System (ADS)
Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng
2016-10-01
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. Full-order and reduced-order observers with position sampled data are proposed, from which two kinds of sampled-data-based consensus protocols are constructed. With the provided sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, the spectrum of the Laplacian matrix and the sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions degenerate to those of the continuous-time protocol, consistent with existing results for the continuous-time case. Finally, the effectiveness of the established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
On dealing with multiple correlation peaks in PIV
NASA Astrophysics Data System (ADS)
Masullo, A.; Theunissen, R.
2018-05-01
A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and by selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
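Detecting several peaks in a correlation map, rather than keeping only the global maximum, can be sketched with a local-maximum filter and a relative threshold. The half-of-main-peak threshold below is an illustrative assumption, not the paper's automatic threshold.

```python
import numpy as np
from scipy.ndimage import maximum_filter, label

def find_peaks(corr_map, rel_threshold=0.5, neighborhood=5):
    # Local maxima that are also strong relative to the main peak
    local_max = corr_map == maximum_filter(corr_map, size=neighborhood)
    strong = corr_map > rel_threshold * corr_map.max()
    labels, n = label(local_max & strong)
    return [tuple(np.argwhere(labels == i)[0]) for i in range(1, n + 1)]

rng = np.random.default_rng(8)
corr = rng.normal(0, 0.05, (32, 32))
corr[10, 12] += 1.0      # main displacement peak
corr[20, 25] += 0.8      # secondary peak from an in-plane gradient
print(find_peaks(corr))
```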
MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering
Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu
2009-01-01
Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
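The ensemble idea, running k-means for a range of cluster numbers and then clustering the resulting co-membership matrix, can be sketched as follows. This follows the general spirit of MULTI-K rather than its exact algorithm; the number of runs, the range of k and the final hierarchical cut are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ensemble_kmeans(X, k_values=range(2, 8), runs=5, final_k=3, seed=0):
    n = X.shape[0]
    co = np.zeros((n, n))
    total = 0
    for k in k_values:
        for r in range(runs):
            lab = KMeans(n_clusters=k, n_init=10, random_state=seed + r).fit_predict(X)
            co += (lab[:, None] == lab[None, :]).astype(float)  # co-membership
            total += 1
    dist = 1.0 - co / total                      # robust co-membership -> dissimilarity
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=final_k, criterion="maxclust")

# Three synthetic "subtypes" in a 50-dimensional expression-like space
X = np.vstack([np.random.default_rng(9).normal(c, 0.3, (20, 50)) for c in (0, 2, 4)])
print(ensemble_kmeans(X))
```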
Detecting text in natural scenes with multi-level MSER and SWT
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Liu, Renjun
2018-04-01
The detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles and diverse languages, which lead to poor detection results. Aiming at these problems, a new text detection method is proposed, which consists of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) so that character regions are detected comprehensively. In the second stage, SWT maps are obtained by applying the stroke width transform (SWT) algorithm to the candidate regions, and cascaded classifiers are then used to reject non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark dataset and on a dataset that we built ourselves. The experimental results show that the proposed method achieves a large improvement compared with other text detection methods.
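The first stage, multi-scale MSER extraction of candidate character regions, can be sketched with OpenCV; the SWT verification stage is omitted, and the scale set and synthetic test image below are illustrative assumptions.

```python
import cv2
import numpy as np

def multi_scale_mser(gray, scales=(1.0, 0.75, 0.5)):
    boxes = []
    for s in scales:
        resized = cv2.resize(gray, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        mser = cv2.MSER_create()
        _, bboxes = mser.detectRegions(resized)
        for (x, y, w, h) in bboxes:
            # Map each bounding box back to the original image scale
            boxes.append((int(x / s), int(y / s), int(w / s), int(h / s)))
    return boxes

img = np.full((200, 400), 255, np.uint8)
cv2.putText(img, "TEXT 2011", (30, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 4)
print(len(multi_scale_mser(img)), "candidate regions")
```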
Panahi, Rasool; Jafari, Zahra; Sheibanizade, Abdoreza; Salehi, Masoud; Esteghamati, Abdoreza; Hasani, Sara
2013-01-01
Introduction: Neonatal hyperbilirubinemia is one of the most important factors affecting the auditory system and can cause sensorineural hearing loss. This study investigated the relationship between behavioral hearing thresholds in children with a history of jaundice and the maximum level of bilirubin concentration in the blood. Materials and Methods: This study was performed on 18 children with a mean age of 5.6 years and a history of neonatal hyperbilirubinemia. Behavioral hearing thresholds, transient evoked emissions and brainstem evoked responses were evaluated in all children. Results: Six children (33.3%) had normal hearing thresholds and the remainder (66.7%) had some degree of hearing loss. There was no significant relationship (r=-0.28, P=0.09) between the mean total bilirubin levels and behavioral hearing thresholds across all samples. Transient evoked emissions were seen only in children with normal hearing thresholds; in eight cases, brainstem evoked responses were not detected. Conclusion: Increased blood levels of bilirubin in the neonatal period are potentially one of the causes of hearing loss. The lack of a direct relationship between neonatal bilirubin levels and average hearing thresholds emphasizes the necessity of monitoring bilirubin across its range of levels. PMID:24303432
Arulandhu, Alfred J.; Staats, Martijn; Hagelaar, Rico; Voorhuijzen, Marleen M.; Prins, Theo W.; Scholtens, Ingrid; Costessi, Adalberto; Duijsings, Danny; Rechenmann, François; Gaspar, Frédéric B.; Barreto Crespo, Maria Teresa; Holst-Jensen, Arne; Birck, Matthew; Burns, Malcolm; Haynes, Edward; Hochegger, Rupert; Klingl, Alexander; Lundberg, Lisa; Natale, Chiara; Niekamp, Hauke; Perri, Elena; Barbante, Alessandra; Rosec, Jean-Philippe; Seyfarth, Ralf; Sovová, Tereza; Van Moorleghem, Christoff; van Ruth, Saskia; Peelen, Tamara
2017-01-01
Abstract DNA metabarcoding provides great potential for species identification in complex samples such as food supplements and traditional medicines. Such a method would aid Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) enforcement officers to combat wildlife crime by preventing illegal trade of endangered plant and animal species. The objective of this research was to develop a multi-locus DNA metabarcoding method for forensic wildlife species identification and to evaluate the applicability and reproducibility of this approach across different laboratories. A DNA metabarcoding method was developed that makes use of 12 DNA barcode markers that have demonstrated universal applicability across a wide range of plant and animal taxa and that facilitate the identification of species in samples containing degraded DNA. The DNA metabarcoding method was developed based on Illumina MiSeq amplicon sequencing of well-defined experimental mixtures, for which a bioinformatics pipeline with user-friendly web-interface was developed. The performance of the DNA metabarcoding method was assessed in an international validation trial by 16 laboratories, in which the method was found to be highly reproducible and sensitive enough to identify species present in a mixture at 1% dry weight content. The advanced multi-locus DNA metabarcoding method assessed in this study provides reliable and detailed data on the composition of complex food products, including information on the presence of CITES-listed species. The method can provide improved resolution for species identification, while verifying species with multiple DNA barcodes contributes to an enhanced quality assurance. PMID:29020743
Arulandhu, Alfred J; Staats, Martijn; Hagelaar, Rico; Voorhuijzen, Marleen M; Prins, Theo W; Scholtens, Ingrid; Costessi, Adalberto; Duijsings, Danny; Rechenmann, François; Gaspar, Frédéric B; Barreto Crespo, Maria Teresa; Holst-Jensen, Arne; Birck, Matthew; Burns, Malcolm; Haynes, Edward; Hochegger, Rupert; Klingl, Alexander; Lundberg, Lisa; Natale, Chiara; Niekamp, Hauke; Perri, Elena; Barbante, Alessandra; Rosec, Jean-Philippe; Seyfarth, Ralf; Sovová, Tereza; Van Moorleghem, Christoff; van Ruth, Saskia; Peelen, Tamara; Kok, Esther
2017-10-01
DNA metabarcoding provides great potential for species identification in complex samples such as food supplements and traditional medicines. Such a method would aid Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) enforcement officers to combat wildlife crime by preventing illegal trade of endangered plant and animal species. The objective of this research was to develop a multi-locus DNA metabarcoding method for forensic wildlife species identification and to evaluate the applicability and reproducibility of this approach across different laboratories. A DNA metabarcoding method was developed that makes use of 12 DNA barcode markers that have demonstrated universal applicability across a wide range of plant and animal taxa and that facilitate the identification of species in samples containing degraded DNA. The DNA metabarcoding method was developed based on Illumina MiSeq amplicon sequencing of well-defined experimental mixtures, for which a bioinformatics pipeline with user-friendly web-interface was developed. The performance of the DNA metabarcoding method was assessed in an international validation trial by 16 laboratories, in which the method was found to be highly reproducible and sensitive enough to identify species present in a mixture at 1% dry weight content. The advanced multi-locus DNA metabarcoding method assessed in this study provides reliable and detailed data on the composition of complex food products, including information on the presence of CITES-listed species. The method can provide improved resolution for species identification, while verifying species with multiple DNA barcodes contributes to an enhanced quality assurance. © The Authors 2017. Published by Oxford University Press.
Differential Correlates of Multi-Type Maltreatment among Urban Youth
ERIC Educational Resources Information Center
Arata, Catalina M.; Langhinrichsen-Rohling, Jennifer; Bowers, David; O'Brien, Natalie
2007-01-01
Objective: The aim of this study was to examine the differential effects of multi-types of maltreatment in an adolescent sample. Different combinations of maltreatment (emotional, sexual, physical, neglect) were examined in relation to both negative affect and externalizing symptoms in male and female youth. Method: One thousand four hundred…
Luo, Jianquan; Meyer, Anne S; Mateiu, R V; Pinelo, Manuel
2015-05-25
Facile co-immobilization of enzymes is highly desirable for bioconversion methods involving multi-enzymatic cascade reactions. Here we show for the first time that three enzymes can be immobilized in flat-sheet polymeric membranes simultaneously or separately by simple pressure-driven filtration (i.e. by directing membrane fouling formation), without any addition of organic solvent. Such co-immobilization and sequential immobilization systems were examined for the production of methanol from CO2 with formate dehydrogenase (FDH), formaldehyde dehydrogenase (FaldDH) and alcohol dehydrogenase (ADH). Enzyme activity was fully retained by this non-covalent immobilization strategy. The two immobilization systems had similar catalytic efficiencies because the second reaction (formic acid→formaldehyde) catalyzed by FaldDH was found to be the cascade bottleneck (a threshold substrate concentration was required). Moreover, the trade-off between the mitigation of product inhibition and low substrate concentration for the adjacent enzymes probably made the co-immobilization meaningless. Thus, sequential immobilization could be used for multi-enzymatic cascade reactions, as it allowed the operational conditions for each single step to be optimized, not only during the enzyme immobilization but also during the reaction process, and the pressure-driven mass transfer (flow-through mode) could overcome the diffusion resistance between enzymes. This study not only offers a green and facile immobilization method for multi-enzymatic cascade systems, but also reveals the reaction bottleneck and provides possible solutions for the bioconversion of CO2 to methanol. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also in the case of calculating detection limits, where the latter can be improved by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was introduced, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
Indonesian Sign Language Number Recognition using SIFT Algorithm
NASA Astrophysics Data System (ADS)
Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.
2018-04-01
Indonesian sign language (ISL) is generally used by deaf individuals to communicate; they use sign language as their primary language, which consists of two types of action: signs and finger spelling. However, not all people understand sign language, which makes it difficult for them to communicate with hearing people and contributes to their feeling of isolation from social life. A solution is needed that can help them interact with hearing people. Much research offers a variety of methods for solving the problem of sign language recognition based on image processing. The SIFT (Scale Invariant Feature Transform) algorithm is one of the methods that can be used to identify an object. SIFT is claimed to be very resistant to scaling, rotation, illumination and noise. Using the SIFT algorithm for Indonesian sign language number recognition yields a recognition rate of 82% with a dataset of 100 sample images, consisting of 50 images for training and 50 images for testing. Changing the threshold value affects the recognition result; the best threshold value is 0.45, with a recognition rate of 94%.
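A minimal sketch of SIFT-based matching with a ratio-test threshold is given below, assuming an OpenCV build that ships SIFT (4.4 or later). Reading the 0.45 value above as the ratio-test threshold is an assumption, and the function and variable names are hypothetical.

```python
import cv2

def count_good_matches(img_train, img_query, ratio=0.45):
    # SIFT keypoints/descriptors for both images, then Lowe's ratio test
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img_train, None)
    _, des2 = sift.detectAndCompute(img_query, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

# A test image could then be assigned the class of the training image that
# yields the most good matches, e.g. (train_imgs and test_img hypothetical):
# best_class = max(train_imgs, key=lambda c: count_good_matches(train_imgs[c], test_img))
```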
The "Smart Dining Table": Automatic Behavioral Tracking of a Meal with a Multi-Touch-Computer.
Manton, Sean; Magerowski, Greta; Patriarca, Laura; Alonso-Alonso, Miguel
2016-01-01
Studying how humans eat in the context of a meal is important to understanding basic mechanisms of food intake regulation and can help develop new interventions for the promotion of healthy eating and the prevention of obesity and eating disorders. While there are a number of methodologies available for the behavioral evaluation of a meal, there is a need for new tools that can simplify data collection through automatic and online analysis. Also, there are currently no methods that leverage technology to add a dimension of interactivity to the meal table. In this study, we examined the feasibility of a new technology for automatic detection and classification of bites during a laboratory meal. We used a SUR40 multi-touch tabletop computer, powered by an infrared camera behind the screen. Tags were attached to three plates, allowing their positions to be tracked, and the saturation (a measure of the infrared intensity) in the surrounding region was measured. A Kinect camera was used to record the meals for manual verification and to provide gesture detection for when bites were taken. Bite detections triggered classification of the source plate by the SUR40 based on saturation flux in the preceding time window. Five healthy subjects (aged 20-40 years, one female) were tested, providing a total sample of 320 bites. Sensitivity, defined as the number of correctly detected bites out of the number of actual bites, was 67.5%. Classification accuracy, defined as the number of correctly classified bites out of those detected, was 82.4%. Due to the poor sensitivity, a second experiment was designed using a single plate and a Myo armband containing a nine-axis accelerometer as an alternative method for bite detection. The same subjects were tested (sample: 195 bites). Using a simple threshold on the pitch reading of the magnetometer, the Myo data achieved 86.1% sensitivity vs. 60.5% with the Kinect. Further, the precision (positive predictive value) was 72.1% for the Myo vs. 42.8% for the Kinect. We conclude that the SUR40 + Myo combination is feasible for automatic detection and classification of bites with adequate accuracy for a range of applications.
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
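The likelihood-restricted prior sampling step described above can be illustrated with naive rejection sampling on a toy problem, shown below in Python rather than the Mathematica used in the paper; the toy likelihood, prior bounds and loop lengths are assumptions, and rejection sampling is deliberately the simplest (and least efficient) choice.

```python
import numpy as np

rng = np.random.default_rng(10)

def log_like(theta):                      # simple unimodal toy likelihood
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.05 ** 2

def sample_above(threshold, ndim=2, max_tries=100000):
    # Draw from the flat prior until the likelihood constraint is satisfied
    for _ in range(max_tries):
        theta = rng.uniform(0.0, 1.0, ndim)
        if log_like(theta) > threshold:
            return theta
    raise RuntimeError("no prior draw exceeded the likelihood threshold")

live = [rng.uniform(0, 1, 2) for _ in range(50)]          # initial live points
for _ in range(200):                                       # nested sampling loop
    logls = [log_like(t) for t in live]
    worst = int(np.argmin(logls))
    live[worst] = sample_above(logls[worst])               # replace worst point
print("highest log-likelihood reached:", max(log_like(t) for t in live))
```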
Summary of water body extraction methods based on ZY-3 satellite
NASA Astrophysics Data System (ADS)
Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin
2017-12-01
Extracting water bodies from remote sensing images is one of the main means of water information extraction. Owing to its spectral characteristics, many methods cannot be applied to ZY-3 satellite images. To solve this problem, we summarize the extraction methods suitable for ZY-3 and analyze the extraction results of existing methods. According to the characteristics of the extraction results, the method combining a water index (WI) with a single-band threshold and a method of texture filtering based on probability statistics are explored. In addition, the advantages and disadvantages of all the methods are compared, which provides a reference for research on water extraction from images. The conclusions are as follows. 1) The NIR band has higher sensitivity to water; consequently, when the surface reflectance in the study area is less similar to that of water, using a single-band threshold method or a multi-band operation can obtain the desired effect. 2) Compared with the water index and HIS optimal index methods, a rule-based object extraction method, which takes into account not only the spectral information of the water but also spatial and texture feature constraints, can obtain a better extraction effect, yet the image segmentation process is time consuming and the definition of the rules requires a certain amount of knowledge. 3) The combination of spectral relationships and a water index can eliminate the interference of shadows to a certain extent. When there are few small water bodies, or small water bodies are not considered in further study, texture filtering based on probability statistics can effectively reduce the noise in the result and, to a certain extent, avoid mixing shadows or paddy fields with water.
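The water-index-plus-threshold idea can be sketched in a few lines. Because ZY-3 lacks a SWIR band, a Green/NIR normalized difference (NDWI) with a fixed cut-off is used below; the band arrays and the 0.1 threshold are illustrative assumptions.

```python
import numpy as np

def water_mask(green, nir, threshold=0.1):
    # Normalized difference water index: water is bright in green, dark in NIR
    ndwi = (green - nir) / (green + nir + 1e-6)
    return ndwi > threshold

rng = np.random.default_rng(11)
green = rng.uniform(0.05, 0.3, (100, 100))     # placeholder reflectance bands
nir = rng.uniform(0.05, 0.4, (100, 100))
nir[40:60, 40:60] = 0.02                       # water absorbs strongly in NIR
mask = water_mask(green, nir)
print("water pixels:", int(mask.sum()))
```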
Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A
2016-04-01
Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase, such as the dim light melatonin onset (DLMO), improves the diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in the clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and the total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in the study; they were required to sleep on a fixed baseline schedule for a week, after which they visited the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (the mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from the 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling-rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.
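Computing the DLMO against an absolute threshold reduces to finding the interpolated time at which melatonin first crosses the cut-off. The sketch below uses a 4 pg/mL threshold and hypothetical hourly values; the linear interpolation is an assumption about the exact computation.

```python
import numpy as np

def dlmo(hours_rel_bedtime, melatonin_pg_ml, threshold=4.0):
    # Interpolated time at which melatonin first rises above the threshold
    t = np.asarray(hours_rel_bedtime, float)
    m = np.asarray(melatonin_pg_ml, float)
    above = np.nonzero(m >= threshold)[0]
    if above.size == 0 or above[0] == 0:
        return None                            # no clean threshold crossing
    i = above[0]
    frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Hourly samples from 5 h before bedtime to 1 h after (hours relative to bed)
times = [-5, -4, -3, -2, -1, 0, 1]
melatonin = [0.5, 0.8, 1.6, 3.1, 6.5, 11.0, 15.2]   # hypothetical pg/mL values
print("DLMO (hours relative to bedtime):", dlmo(times, melatonin))
```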
Rapid habitability assessment of Mars samples by pyrolysis-FTIR
NASA Astrophysics Data System (ADS)
Gordon, Peter R.; Sephton, Mark A.
2016-02-01
Pyrolysis Fourier transform infrared spectroscopy (pyrolysis FTIR) is a potential sample selection method for Mars Sample Return missions. FTIR spectroscopy can be performed on solid and liquid samples but also on gases following preliminary thermal extraction, pyrolysis or gasification steps. The detection of hydrocarbon and non-hydrocarbon gases can reveal information on sample mineralogy and past habitability of the environment in which the sample was created. The absorption of IR radiation at specific wavenumbers by organic functional groups can indicate the presence and type of any organic matter present. Here we assess the utility of pyrolysis-FTIR to release water, carbon dioxide, sulfur dioxide and organic matter from Mars relevant materials to enable a rapid habitability assessment of target rocks for sample return. For our assessment a range of minerals were analyzed by attenuated total reflectance FTIR. Subsequently, the mineral samples were subjected to single step pyrolysis and multi step pyrolysis and the products characterised by gas phase FTIR. Data from both single step and multi step pyrolysis-FTIR provide the ability to identify minerals that reflect habitable environments through their water and carbon dioxide responses. Multi step pyrolysis-FTIR can be used to gain more detailed information on the sources of the liberated water and carbon dioxide owing to the characteristic decomposition temperatures of different mineral phases. Habitation can be suggested when pyrolysis-FTIR indicates the presence of organic matter within the sample. Pyrolysis-FTIR, therefore, represents an effective method to assess whether Mars Sample Return target rocks represent habitable conditions and potential records of habitation and can play an important role in sample triage operations.
Vehicle response-based track geometry assessment using multi-body simulation
NASA Astrophysics Data System (ADS)
Kraft, Sönke; Causse, Julien; Coudert, Frédéric
2018-02-01
The assessment of the geometry of railway tracks is an indispensable requirement for safe rail traffic. Defects which represent a risk for the safety of the train have to be identified and the necessary measures taken. According to current standards, amplitude thresholds are applied to the track geometry parameters measured by recording cars. This geometry-based assessment has proved its value but suffers from the low correlation between the geometry parameters and the vehicle reactions. Experience shows that some defects leading to critical vehicle reactions are underestimated by this approach. The use of vehicle responses in the track geometry assessment process allows identifying critical defects and improving the maintenance operations. This work presents a vehicle response-based assessment method using multi-body simulation. The choice of the relevant operation conditions and the estimation of the simulation uncertainty are outlined. The defects are identified from exceedances of track geometry and vehicle response parameters. They are then classified using clustering methods and the correlation with vehicle response is analysed. The use of vehicle responses allows the detection of critical defects which are not identified from geometry parameters.
Research on AHP decision algorithms based on BP algorithm
NASA Astrophysics Data System (ADS)
Ma, Ning; Guan, Jianhe
2017-10-01
Decision making is the thinking activity by which people choose or judge, and scientific decision-making has always been a hot issue in the field of research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express and calculate subjective judgments in numerical form. In decision analysis with the AHP method, the rationality of the pairwise-comparison judgment matrix has a great influence on the decision result. However, when dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning ability; it can refine the data by iteratively modifying the weights and thresholds of the network so as to minimize the mean square error. In this paper, the BP algorithm is used to handle the consistency of the pairwise-comparison judgment matrix in the AHP.
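For context, the consistency check that motivates the BP correction can be written in a few lines: compute the principal eigenvalue of the pairwise judgment matrix, then the consistency index CI = (lambda_max - n)/(n - 1) and the consistency ratio CR = CI/RI. The sketch below uses Saaty's usual random-index table and a made-up 3x3 judgment matrix; it illustrates the check only, not the BP-based correction itself.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(judgment_matrix):
    A = np.asarray(judgment_matrix, dtype=float)
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()    # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                 # consistency index
    return ci / RI[n]                            # consistency ratio

# Hypothetical reciprocal pairwise judgment matrix for three criteria
A = [[1,   2,   4],
     [1/2, 1,   3],
     [1/4, 1/3, 1]]
print(round(consistency_ratio(A), 3))            # CR < 0.1 is conventionally acceptable
```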
NASA Astrophysics Data System (ADS)
Witteveen, Jeroen A. S.; Bijl, Hester
2009-10-01
The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
Target matching based on multi-view tracking
NASA Astrophysics Data System (ADS)
Liu, Yahui; Zhou, Changsheng
2011-01-01
A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted using two frame differences, and a bounding box, regarded as the target region, is computed. Extremal regions are obtained with MSER. After being fitted to ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are accepted when the ratio of the closest to the second-closest descriptor distance falls below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination changes.
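The initial-matching rule mentioned above (accept a match only if the closest descriptor distance is well below the second-closest) is easy to sketch on its own. The snippet below uses random vectors in place of SIFT descriptors computed on normalized MSER regions, and omits the RANSAC outlier-rejection step; the 0.8 ratio is an assumed value.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Return (i, j) pairs whose nearest/second-nearest distance ratio is below `ratio`."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]             # nearest and second-nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(50, 128))                   # descriptors from camera A
desc_b = desc_a + 0.05 * rng.normal(size=(50, 128))   # noisy counterparts from camera B
print(len(ratio_test_matches(desc_a, desc_b)))        # most of the 50 points should survive
```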
Poperechna, Nataliya; Heumann, Klaus G
2005-09-01
An accurate and sensitive multi-species species-specific isotope dilution GC-ICP-MS method was developed for the simultaneous determination of trimethyllead (Me3Pb+), monomethylmercury (MeHg+) and the three butyltin species Bu3Sn+, Bu2Sn2+, and BuSn3+ in biological samples. The method was validated by three biological reference materials (CRM 477, mussel tissue certified for butyltins; CRM 463, tuna fish certified for MeHg+; DORM 2, dogfish muscle certified for MeHg+). Under certain conditions, and with minor modifications of the sample pretreatment procedure, this method could also be transferred to environmental samples such as sediments, as demonstrated by analyzing sediment reference material BCR 646 (freshwater sediment, certified for butyltins). The detection limits of the multi-species GC-ICP-IDMS method for biological samples were 1.4 ng g(-1) for MeHg+, 0.06 ng g(-1) for Me3Pb+, 0.3 ng g(-1) for BuSn3+ and Bu3Sn+, and 1.2 ng g(-1) for Bu2Sn2+. Because of the high relevance of these heavy metal alkyl species to the quality assurance of seafood, the method was also applied to corresponding samples purchased from a supermarket. The methylated lead fraction in these samples, correlated to total lead, varied over a broad range (from 0.01% to 7.6%). On the other hand, the MeHg+ fraction was much higher, normally in the range of 80-100%. Considering that we may expect tighter legislative limitations on MeHg+ levels in seafood in the future, we found the highest methylmercury contents (up to 10.6 microg g(-1)) in two shark samples, an animal which is at the end of the marine food chain, whereas MeHg+ contents of less than 0.2 microg g(-1) were found in most other seafood samples; these results correlate with the idea that MeHg+ is usually of biological origin in the marine environment. The concentration of butyltins and the fraction of the total tin content that is from butyltins strongly depend on possible contamination, due to the exclusively anthropogenic character of these compounds. A broad variation in the butylated tin fraction (in the range of <0.3-49%) was therefore observed in different seafood samples. Corresponding isotope-labeled spike compounds (except for trimethyllead) are commercially available for all of these compounds, and since these can be used in the multi-species species-specific GC-ICP-IDMS method developed here, this technique shows great potential for routine analysis in the future.
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. The support vector machine (SVM) is a new general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
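A hedged sketch of the 'one to others' idea follows: each fault class, in priority order, gets a binary SVM against everything that remains, and at prediction time the first classifier that fires assigns the label. Class names, priorities and the synthetic data are placeholders, and the scikit-learn SVC stands in for whatever SVM implementation was actually used.

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersTree:
    def __init__(self, priority):                # fault classes ordered by priority
        self.priority = list(priority)
        self.models = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        remaining = np.ones(len(y), dtype=bool)
        self.models = []
        for cls in self.priority[:-1]:           # the last class is the fall-through leaf
            clf = SVC(kernel="rbf", gamma="scale").fit(X[remaining], y[remaining] == cls)
            self.models.append((cls, clf))
            remaining &= (y != cls)              # later nodes never see earlier classes
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.full(len(X), self.priority[-1], dtype=object)
        undecided = np.ones(len(X), dtype=bool)
        for cls, clf in self.models:
            hit = undecided & clf.predict(X).astype(bool)
            out[hit] = cls
            undecided &= ~hit
        return out

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4)) + 3 * np.repeat(np.eye(4)[:3], 100, axis=0)  # three separable faults
y = np.repeat(["rub", "misalignment", "unbalance"], 100)
tree = OneToOthersTree(["rub", "misalignment", "unbalance"]).fit(X, y)
print((tree.predict(X) == y).mean())
```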
Ferrari, Matthew J.; Fram, Miranda S.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 950 square kilometer (370 square mile) Central Sierra study unit (CENSIE) was investigated in May 2006 as part of the Priority Basin Assessment project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Assessment project was developed in response to the Ground-Water Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). This study was designed to provide a spatially unbiased assessment of the quality of raw ground water used for drinking-water supplies within CENSIE, and to facilitate statistically consistent comparisons of ground-water quality throughout California. Samples were collected from thirty wells in Madera County. Twenty-seven of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and three were selected to aid in evaluation of specific water-quality issues (understanding wells). Ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], gasoline oxygenates and degradates, pesticides and pesticide degradates), constituents of special interest (N-nitrosodimethylamine, perchlorate, and 1,2,3-trichloropropane), naturally occurring inorganic constituents [nutrients, major and minor ions, and trace elements], radioactive constituents, and microbial indicators. Naturally occurring isotopes [tritium, carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon] and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. In total, over 250 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at approximately one-sixth of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Results from field blanks indicated contamination was not a noticeable source of bias in the data for ground-water samples. Differences between replicate samples were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most constituents. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH), and thresholds established for aesthetic concerns (Secondary Maximum Contaminant Levels, SMCL-CA) by CDPH. Therefore, any comparison of the results of this study to drinking-water standards is for illustrative purposes only and is not indicative of compliance or non-compliance with those standards. Most constituents that were detected in ground-water samples were found at concentrations below drinking-water standards or thresholds. Six constituents (fluoride, arsenic, molybdenum, uranium, gross-alpha radioactivity, and radon-222) were detected at concentrations higher than thresholds set for health-based regulatory purposes. Three additional constituents (pH, iron, and manganese) were detected at concentrations above thresholds set for aesthetic concerns. Volatile organic compounds (VOCs) and pesticides were detected in less than one-third of the samples and generally at less than one one-hundredth of a health-based threshold.
Rios, Anthony; Kavuluru, Ramakanth
2013-09-01
Extracting diagnosis codes from medical records is a complex task carried out by trained coders by reading all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in recent years. Machine learning approaches to multi-label text classification provide an important methodology for this task, given that each EMR can be associated with multiple codes. In this paper, we study the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second, larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state-of-the-art on the gold standard dataset, on our complex in-house dataset we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance.
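The probabilistic-thresholding step can be illustrated in isolation: fit any multi-label classifier, then tune one decision threshold per label on held-out data instead of cutting every posterior at 0.5. The data, the logistic-regression base learner and the 0.05-step threshold grid below are all stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
Y = (X[:, :3] + 0.5 * rng.normal(size=(400, 3)) > 0.4).astype(int)   # three synthetic "codes"
X_tr, X_val, Y_tr, Y_val = X[:300], X[300:], Y[:300], Y[300:]

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
P_val = clf.predict_proba(X_val)

grid = np.arange(0.05, 0.95, 0.05)
best = [max(grid, key=lambda t: f1_score(Y_val[:, j], (P_val[:, j] >= t).astype(int),
                                         zero_division=0))
        for j in range(Y.shape[1])]
print("per-label thresholds:", np.round(best, 2))
```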
Zlotnik, V.A.; McGuire, V.L.
1998-01-01
Using the developed theory and modified Springer-Gelhar (SG) model, an identification method is proposed for estimating hydraulic conductivity from multi-level slug tests. The computerized algorithm calculates hydraulic conductivity from both monotonic and oscillatory well responses obtained using a double-packer system. Field verification of the method was performed at a specially designed fully penetrating well of 0.1-m diameter with a 10-m screen in a sand and gravel alluvial aquifer (MSEA site, Shelton, Nebraska). During well installation, disturbed core samples were collected every 0.6 m using a split-spoon sampler. Vertical profiles of hydraulic conductivity were produced on the basis of grain-size analysis of the disturbed core samples. These results closely correlate with the vertical profile of horizontal hydraulic conductivity obtained by interpreting multi-level slug test responses using the modified SG model. The identification method was applied to interpret the response from 474 slug tests in 156 locations at the MSEA site. More than 60% of responses were oscillatory. The method produced a good match to experimental data for both oscillatory and monotonic responses using an automated curve matching procedure. The proposed method allowed us to drastically increase the efficiency of each well used for aquifer characterization and to process massive arrays of field data. Recommendations generalizing this experience to massive application of the proposed method are developed.
Using risk-adjustment models to identify high-cost risks.
Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J
2003-11-01
We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
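The two headline measures (discrimination of a prespecified high-cost flag, and the share of high-cost dollars captured at a prediction threshold) can be reproduced schematically as below. The expense distribution, the noisy forecast and the 0.5% cut-offs are simulated, so the numbers only illustrate the mechanics.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
expense = rng.lognormal(mean=7.0, sigma=1.2, size=100_000)                 # actual annual expense
forecast = expense * rng.lognormal(mean=0.0, sigma=0.8, size=expense.size) # imperfect risk model

high_cost = (expense >= np.quantile(expense, 0.995)).astype(int)           # top 0.5% by cost
print("AUC:", round(roc_auc_score(high_cost, forecast), 3))

flagged = forecast >= np.quantile(forecast, 0.995)                         # 0.5% prediction threshold
captured = expense[flagged & (high_cost == 1)].sum() / expense[high_cost == 1].sum()
print("share of high-cost dollars captured:", round(captured, 3))
```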
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found, in differentially private FSM, the amount of required noise is proportionate to the number of candidate sequences. If we could effectively reduce the number of unpromising candidate sequences, the utility and privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
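The building block the pruning relies on, noisy support counts, is simple to show on its own: perturb each candidate sequence's support with Laplace noise and compare the noisy value against a (possibly relaxed) frequency threshold. The sensitivity value, the epsilon and the toy counts below are placeholders and do not reflect the paper's privacy budget analysis.

```python
import numpy as np

def noisy_supports(true_supports, epsilon, sensitivity=1.0, seed=None):
    """Laplace-perturbed support counts for a set of candidate sequences."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {seq: count + rng.laplace(0.0, scale) for seq, count in true_supports.items()}

supports = {("a",): 120, ("a", "b"): 75, ("a", "b", "c"): 18}   # toy candidate sequences
noisy = noisy_supports(supports, epsilon=0.5, seed=0)
threshold = 50                                                  # relaxed frequency threshold
print({s: round(v, 1) for s, v in noisy.items()})
print("kept:", [s for s, v in noisy.items() if v >= threshold])
```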
Ma, Jiping; Lu, Xi; Xia, Yan; Yan, Fengli
2015-02-01
A solid-phase extraction (SPE) method using multi-walled carbon nanotubes as adsorbent coupled with high-performance liquid chromatography was developed for the determination of four pyrazole and pyrrole pesticides (fenpyroximate, chlorfenapyr, fipronil and flusilazole) in environmental water samples. Several parameters, such as extraction adsorbent, elution solvent and volume and sample loading flow rate were optimized to obtain high SPE recoveries and extraction efficiency. The calibration curves for the pesticides extracted were linear in the range of 0.05-10 μg L(-1) for chlorfenapyr and fenpyroximate and 0.05-20 μg L(-1) for fipronil and flusilazole, with the correlation coefficients (r(2)) between 0.9966 and 0.9990. The method gave good precisions (relative standard deviation %) from 2.9 to 10.1% for real spiked samples from reservoir water and seawater; method recoveries ranged 92.2-105.9 and 98.5-103.9% for real spiked samples from reservoir water and seawater, respectively. Limits of detection (S/N = 3) for the method were determined to be 8-19 ng L(-1). The optimized method was successfully applied to the determination of four pesticides of pyrazoles and pyrroles in real environmental water samples. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian
2017-01-01
The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
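As a simplified stand-in for the final clustering step, the snippet below runs DBSCAN over per-superpixel feature vectors (here just synthetic centroid coordinates plus a mean intensity), with eps playing the role of the adaptively fitted clustering threshold. It is not the paper's optimized sequence algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
nodule = rng.normal(loc=[40.0, 40.0, 0.8], scale=[2.0, 2.0, 0.05], size=(60, 3))  # compact bright blob
background = rng.uniform([0.0, 0.0, 0.0], [128.0, 128.0, 0.4], size=(400, 3))     # scattered, darker
features = np.vstack([nodule, background])              # (x, y, mean intensity) per superpixel

labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(features)
print("clusters:", len(set(labels) - {-1}), "| noise points:", int((labels == -1).sum()))
```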
NASA Astrophysics Data System (ADS)
Therrien, A. C.; Lemaire, W.; Lecoq, P.; Fontaine, R.; Pratte, J.-F.
2018-01-01
The advantages of Time-of-Flight positron emission tomography (TOF-PET) have pushed the development of detectors with better time resolution. In particular, Silicon Photomultipliers (SiPM) have evolved tremendously in the past decade and arrays with a fully digital readout are the next logical step (dSiPM). New multi-timestamp methods use the precise time information of multiple photons to estimate the time of a PET event with greater accuracy, resulting in excellent time resolution. We propose a method which uses the same timestamps as the time estimator to perform energy discrimination, thus using data obtained within 5 ns of the beginning of the event. Having collected all the necessary information, the dSiPM could then be disabled for the remaining scintillation while dedicated electronics process the collected data. This would reduce afterpulsing as the SPAD would be turned off for several hundred nanoseconds, emptying the majority of traps. The proposed method uses a strategy based on subtraction and minimal electronics to reject energy below a selected threshold. This method achieves an error rate of less than 3% for photopeak discrimination (threshold at 400 keV) for dark count rates up to 100 cps/μm2, time-to-digital converter resolution up to 50 ps and a photon detection efficiency ranging from 10 to 70%.
Ellipsometric porosimetry on pore-controlled TiO2 layers
NASA Astrophysics Data System (ADS)
Rosu, Dana-Maria; Ortel, Erik; Hodoroaba, Vasile-Dan; Kraehnert, Ralph; Hertwig, Andreas
2017-11-01
The practical performance of surface coatings in applications like catalysis, water splitting or batteries depends critically on the coating materials' porosity. Determining the porosity in a fast and non-destructive way is still an unsolved problem for industrial thin-films technology. As a contribution to calibrated, non-destructive, optical layer characterisation, we present a multi-method comparison study on porous TiO2 films deposited by sol-gel synthesis on Si wafers. The ellipsometric data were collected on a range of samples with different TiO2 layer thickness and different porosity values. These samples were produced by templated sol-gel synthesis resulting in layers with a well-defined pore size and pore density. The ellipsometry measurement data were analysed by means of a Bruggeman effective medium approximation (BEMA), with the aim to determine the mixture ratio of void and matrix material by a multi-sample analysis strategy. This analysis yielded porosities and layer thicknesses for all samples as well as the dielectric function for the matrix material. Following the idea of multi-method techniques in metrology, the data was referenced to imaging by electron microscopy (SEM) and to a new EPMA (electron probe microanalysis) porosity approach for thin film analysis. This work might lead to a better metrological understanding of optical porosimetry and also to better-qualified characterisation methods for nano-porous layer systems.
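For a two-phase film of void (epsilon = 1) and matrix material, the Bruggeman condition f(e_v - e_eff)/(e_v + 2 e_eff) + (1 - f)(e_m - e_eff)/(e_m + 2 e_eff) = 0 can be solved directly for the void fraction f once an effective permittivity has been fitted from the ellipsometric data. The matrix permittivity used below is an assumed value, so the numbers are illustrative only.

```python
def bema_void_fraction(eps_eff, eps_matrix, eps_void=1.0):
    """Void fraction from the two-phase Bruggeman effective medium condition."""
    a = (eps_void - eps_eff) / (eps_void + 2.0 * eps_eff)
    b = (eps_matrix - eps_eff) / (eps_matrix + 2.0 * eps_eff)
    return b / (b - a)

eps_matrix = 6.0                      # assumed permittivity of the dense TiO2 matrix
for eps_eff in (5.0, 4.0, 3.0):       # fitted effective permittivities of porous layers
    print(eps_eff, "->", round(bema_void_fraction(eps_eff, eps_matrix), 3))
```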
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast, and the Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach to detecting key points requires little computation and has high positioning accuracy and strong anti-noise ability; (2) using PCA-SIFT to describe the key points. A 128-dimensional vector is formed, based on the SIFT method, for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional eigenvector, so that the key points were re-described by dimension-reduced eigenvectors. After the PCA reduction, the descriptor shrinks from the original 128 dimensions to 20, which reduces the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed; (3) using the distance ratio between the nearest and second-nearest neighbours as the criterion for accepting initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods used to eliminate false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard false matched point pairs; and (4) introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which results in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and ability to handle rotation. Results show the effectiveness of the approach.
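Step (2), the PCA reduction of the descriptors from 128 to 20 dimensions, is a one-liner with scikit-learn; random vectors stand in for the real SIFT descriptors here, so this only shows the mechanics of the dimension reduction.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))       # one 128-D SIFT vector per detected key point
pca = PCA(n_components=20).fit(descriptors)     # feature space learned from all key points
reduced = pca.transform(descriptors)            # 20-D vectors used for nearest-neighbour search
print(descriptors.shape, "->", reduced.shape)
```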
Renjith, Arokia; Manjula, P; Mohan Kumar, P
2015-01-01
Brain tumours are one of the main causes of increased mortality among children and adults. This paper proposes an improved method for Magnetic Resonance Imaging (MRI) brain image classification and segmentation. Automated classification is encouraged by the need for high accuracy when dealing with a human life. Detection of brain tumours is a challenging problem, due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for the detection of brain tumours, as they are well suited to soft tissue assessment. First, image pre-processing is used to enhance image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of the image. Feature extraction then extracts features from the image using the gray-level co-occurrence matrix (GLCM). Next, a Neuro-Fuzzy technique is used to classify the stage of a brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. Classifier performance is evaluated based on classification accuracy. The simulated results show that the proposed classifier provides better accuracy than the previous method.
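The Otsu step used for locating the tumour can be written directly in NumPy: pick the grey level that maximises the between-class variance of the image histogram. The synthetic two-mode "image" below is only there to exercise the function.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(np.ravel(image), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                        # class-0 probability up to each level
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.clip(w0, 1e-12, None)
    mu1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)
    between_class_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_class_var)]

rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(60, 10, 9000), rng.normal(180, 15, 1000)])  # background + lesion
print(round(float(otsu_threshold(image)), 1))    # should fall between the two intensity modes
```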
Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems
Najm, Habib N.; Valorani, Mauro
2014-04-12
We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
Shear wave speed estimation by adaptive random sample consensus method.
Lin, Haoming; Wang, Tianfu; Chen, Siping
2014-01-01
This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used here is to find a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, the simulation and phantom experiment results were compared with linear regression with all points (LRWAP) and the radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimate are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate for shear wave speed estimation.
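The consensus idea is easy to sketch for an arrival-time fit: repeatedly fit a line to two random (lateral position, arrival time) points, keep the fit with the most inliers, refit on the consensus set and take speed = 1/slope. The fixed inlier tolerance below is exactly what ARANDSAC avoids (it adapts the inlier percentage), so treat this as the baseline RANSAC flavour on made-up data.

```python
import numpy as np

def ransac_speed(x_mm, t_ms, n_iter=500, tol_ms=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x, t = np.asarray(x_mm, float), np.asarray(t_ms, float)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (t[j] - t[i]) / (x[j] - x[i])
        intercept = t[i] - slope * x[i]
        inliers = np.abs(t - (slope * x + intercept)) < tol_ms
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    slope = np.polyfit(x[best_inliers], t[best_inliers], 1)[0]   # refit on the consensus set
    return 1.0 / slope                                           # mm/ms, i.e. m/s

x = np.linspace(0, 8, 30)                                        # lateral positions (mm)
t = x / 2.5 + np.random.default_rng(1).normal(0, 0.01, x.size)   # true shear wave speed: 2.5 m/s
t[::7] += 0.8                                                    # a few gross outliers
print(round(ransac_speed(x, t), 2))
```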
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongjing, E-mail: wuhongjing@mail.nwpu.edu.cn; Wu, Guanglei, E-mail: wuguanglei@mail.xjtu.edu.cn; Wu, Qiaofeng
2014-11-15
We report the preparation of C@Ni–NiO core–shell hybrid solid spheres or multi-shelled NiO hollow spheres by combining a facile hydrothermal route with a calcination process in an H₂ or air atmosphere, respectively. The synthesized C@Ni–NiO core–shell solid spheres, with diameters of approximately 2–6 μm, were built from dense NiO nanoparticles coated by random two-dimensional metallic Ni nanosheets without any visible pores. The multi-shelled NiO hollow spheres were built from particle-like ligaments, with many pores several nanometres in size on the surface. Combining Raman spectra with X-ray photoelectron spectra (XPS) suggested that the defects in the samples play a limited role in the dielectric loss. Compared with the other samples, the permeability of the samples calcined in H₂ and air was increased slightly and the natural resonance frequency shifted to higher frequency (7, 11 and 14 GHz, respectively), leading to an enhancement of the microwave absorption property. For the sample calcined in H₂, an optimal reflection loss of less than −10 dB was obtained at 7 GHz with a matching thickness of 5.0 mm. Our study demonstrates the potential of C@Ni–NiO core–shell hybrid solid spheres and multi-shelled NiO hollow spheres as more efficient electromagnetic (EM) wave absorbers. Highlights: • C@Ni–NiO core–shell hybrid solid spheres were synthesized by a facile method. • Multi-shelled NiO hollow spheres were synthesized by a facile method. • The defects in the samples play a limited role in dielectric loss. • The permeability of the samples calcined in H₂ and air was increased. • The microwave absorbability of C@Ni–NiO core–shell hybrid solid spheres was investigated.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
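Criterion-maximising threshold selection of this kind shares a simple skeleton: scan the candidate grey levels and keep the one with the best score. The score used below (foreground/background separation, down-weighted as the defect region grows) is only a stand-in to show that structure; it is not the published MWOC criterion.

```python
import numpy as np

def best_threshold(image, score, levels=range(1, 255)):
    return max(levels, key=lambda t: score(image, image < t))    # defects assumed darker than rail

def stand_in_score(image, defect_mask):
    n_defect = defect_mask.sum()
    if n_defect == 0 or n_defect == defect_mask.size:
        return -np.inf
    separation = abs(image[defect_mask].mean() - image[~defect_mask].mean())
    small_object_weight = 1.0 - defect_mask.mean()               # favour a small defect proportion
    return separation * small_object_weight

rng = np.random.default_rng(0)
rail = rng.normal(170, 8, size=(64, 256))                        # unimodal rail surface
rail[30:34, 100:120] -= 90                                       # one small dark defect
print(best_threshold(rail.clip(0, 255), stand_in_score))
```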
NASA Astrophysics Data System (ADS)
Hu, Leqian; Ma, Shuai; Yin, Chunling
2018-03-01
In this work, fluorescence spectroscopy combined with multi-way pattern recognition techniques was developed for determining the geographical origin of kudzu root and for detecting and quantifying adulterants in kudzu root. Excitation-emission (EEM) spectra were obtained for 150 pure kudzu root samples of different geographical origins and 150 fake kudzu roots with different adulteration proportions, by recording emission from 330 to 570 nm with excitation in the range of 320-480 nm. Multi-way principal components analysis (M-PCA) and multilinear partial least squares discriminant analysis (N-PLS-DA) methods were used to decompose the excitation-emission matrix datasets. The 150 pure kudzu root samples could be differentiated exactly from each other according to their geographical origins by the M-PCA and N-PLS-DA models. For the adulterated kudzu root samples, N-PLS-DA gave better and more reliable classification results than the M-PCA model. The results obtained in this study indicate that EEM spectroscopy coupled with multi-way pattern recognition can be used as an easy, rapid and novel tool to distinguish the geographical origin of kudzu root and to detect adulterated kudzu root. In addition, this method is also suitable for determining the geographic origin and detecting the adulteration of other foodstuffs that produce fluorescence.
Micromagnetic Architectures for On-chip Microparticle Transport
NASA Astrophysics Data System (ADS)
Ouk, Minae; Beach, Geoffrey S. D.
2015-03-01
Superparamagnetic microbeads (SBs) are widely used to capture and manipulate biological entities in a fluid environment. Chip-based magnetic actuation provides a means to transport SBs in lab-on-a-chip devices. This is usually accomplished using the stray field from patterned magnetic microstructures, or domain walls in magnetic nanowires. Magnetic anti-dot arrays are particularly attractive due to the high-gradient stray fields from their partial domain wall structures. Here we use a self-assembly method to create magnetic anti-dot arrays in Co films, and describe the motion of SBs across the surface by a rotating field. We find a critical field-rotation frequency beyond which bead motion ceases and a critical threshold for both the in-plane and out-of-plane field components that must be exceeded for bead motion to occur. We show that these field thresholds are bead size dependent, and can thus be used to digitally separate magnetic beads in multi-bead populations. Hence these large-area structures can be used to combine long distance transport with novel functionalities.
Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...
Hsu, Justine; Flores, Gabriela; Evans, David; Mills, Anne; Hanson, Kara
2018-05-31
Monitoring financial protection against catastrophic health expenditures is important for understanding how health financing arrangements in a country protect its population against the high costs associated with accessing health services. While catastrophic health expenditures are generally defined as occurring when household expenditures for health exceed a given threshold of household resources, there is no gold standard, and several methods are applied to define the threshold and household resources. These different approaches to constructing the indicator might give different pictures of a country's progress towards financial protection. For monitoring to effectively provide policy insight, it is critical to understand the sensitivity of measurement to these choices. This paper examines the impact of varying two methodological choices by analysing household expenditure data from a sample of 47 countries. We assess the sensitivity of cross-country comparisons to a range of thresholds by testing for restricted dominance. We further assess the sensitivity of comparisons to different methods for defining household resources (i.e. total expenditure, non-food expenditure and non-subsistence expenditure) by conducting correlation tests of country rankings. We found that country rankings are robust to the choice of threshold in a tenth to a quarter of comparisons within the 5-85% threshold range; this increases to half of comparisons if the threshold is restricted to 5-40%, following those commonly used in the literature. Furthermore, correlations of country rankings using different methods to define household resources were moderate to high; thus, this choice makes less difference from a measurement perspective than from an ethical perspective, as different definitions of available household resources reflect varying concerns for equity. Interpreting comparisons from global monitoring based on a single threshold should be done with caution, as these may not provide reliable insight into relative country progress. We therefore recommend that financial protection against catastrophic health expenditures be measured across a range of thresholds using a catastrophic incidence curve, as shown in this paper. We further recommend evaluating financial protection in relation to a country's health financing system arrangements in order to better understand the extent of protection and better inform future policy changes.
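The recommended catastrophic incidence curve is straightforward to compute once the numerator and denominator are fixed: for each threshold, the share of households whose out-of-pocket health spending exceeds that fraction of their resources. The simulated spending below and the three resource definitions (total, non-food, non-subsistence) are illustrative stand-ins for survey data.

```python
import numpy as np

rng = np.random.default_rng(0)
total = rng.lognormal(8.0, 0.5, 10_000)                     # total household expenditure
food = 0.45 * total * rng.uniform(0.7, 1.3, total.size)     # observed food spending
subsistence = 0.35 * total                                  # assumed subsistence (poverty-line) spending
health = total * rng.beta(1.2, 12.0, total.size)            # out-of-pocket health spending

resources = {"total": total,
             "non-food": total - food,
             "non-subsistence": total - subsistence}

thresholds = np.arange(0.05, 0.45, 0.05)
for name, denominator in resources.items():
    incidence = [(health / denominator > t).mean() for t in thresholds]
    print(name, [round(v, 3) for v in incidence])
```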
Stefanovic, Aleksandra; Roscoe, Diane; Ranasinghe, Romali; Wong, Titus; Bryce, Elizabeth; Porter, Charlene; Lim, Adelina; Grant, Jennifer; Ng, Karen; Pudek, Morris
2017-09-01
Urine flow cytometry (UFC) is an automated method to quantify bacterial and white blood cell (WBC) counts. We aimed to determine whether a threshold for these parameters can be set to use UFC as a sensitive screen to predict which urine samples will subsequently grow in culture. Urines submitted to our microbiology laboratory at a tertiary care centre from 22 July 2015 to 17 February 2016 underwent UFC (Sysmex UF-1000i) analysis, regular urinalysis and urine culture. Positive urine cultures were defined as growth ≥10^4 c.f.u. ml^-1 of organisms associated with urinary tract infections. The correlation of UFC bacterial and WBC counts with urine culture was assessed using receiver operating characteristic curves. The sensitivity (SN), specificity (SP), negative predictive values (NPVs), positive predictive values (PPVs) and false negative rate (FNR) were calculated at various thresholds in immunocompetent and immunosuppressed patients. A total of 15 046 urine specimens were submitted, of which 14 908 were analysable in the study. The average time to UFC result from receipt in the laboratory was 0.76 h (±1.04). The test performance at a set threshold of UFC bacteria ≥20 or WBC >5 was: SN=96.0 %, SP=39.2 %, PPV=47.0 %, NPV=94.5 % and FNR=4.0 %. This threshold eliminates 26 % of urine cultures. Immunosuppressed hosts had a lower sensitivity of 90.6 % and a higher FNR of 9.4 %. UFC is a rapid and sensitive method to screen out urine samples that will subsequently be negative and to reflex urines to culture that will subsequently grow. UFC results are available within 1 h from receipt and enable the elimination of culture when the set threshold is not met.
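The reported screen metrics follow from a simple 2x2 table once a rule such as "bacteria >= 20 or WBC > 5" is applied; the snippet below recomputes SN, SP, PPV, NPV, FNR and the fraction of cultures eliminated on simulated counts (the simulated distributions are not the study's data).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
culture_pos = rng.random(n) < 0.25                                   # cultures that would grow
bacteria = np.where(culture_pos, rng.lognormal(5.0, 1.5, n), rng.lognormal(1.0, 1.5, n))
wbc = np.where(culture_pos, rng.lognormal(3.0, 1.0, n), rng.lognormal(0.5, 1.0, n))

screen_pos = (bacteria >= 20) | (wbc > 5)                            # UFC screening rule
tp = (screen_pos & culture_pos).sum();   fn = (~screen_pos & culture_pos).sum()
fp = (screen_pos & ~culture_pos).sum();  tn = (~screen_pos & ~culture_pos).sum()

print("SN", round(tp / (tp + fn), 3), "SP", round(tn / (tn + fp), 3),
      "PPV", round(tp / (tp + fp), 3), "NPV", round(tn / (tn + fn), 3),
      "FNR", round(fn / (tp + fn), 3),
      "cultures eliminated", round(float((~screen_pos).mean()), 3))
```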
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; this function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of a chaotic signal well and that the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
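For orientation, the baseline this method improves on, single-threshold wavelet shrinkage, looks as follows with an ordinary discrete wavelet transform (PyWavelets) and the universal threshold sigma*sqrt(2 ln N) applied softly to the detail coefficients. It is not the hierarchical synchrosqueezed scheme itself, and the test signal is a stand-in for a chaotic series.

```python
import numpy as np
import pywt

def dwt_soft_denoise(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale from the finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))               # universal (single) threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 12 * t) * np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = dwt_soft_denoise(noisy)
print(round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))   # RMSE after denoising
```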
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we propose a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
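The shape of the pipeline, projecting concatenated multi-omics features onto a shared low-rank subspace and then clustering samples there, can be mimicked with off-the-shelf tools; truncated SVD and k-means below stand in for LRAcluster's probabilistic low-rank model and subtype assignment, on simulated data.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_per_group, p_expr, p_meth = 60, 2000, 1000
groups = np.repeat([0, 1, 2], n_per_group)                      # three hidden "subtypes"
signal = 1.5 * rng.normal(size=(3, p_expr + p_meth))[groups]    # subtype-specific profiles
noise = np.hstack([rng.normal(size=(180, p_expr)),              # "expression" block
                   rng.normal(size=(180, p_meth))])             # "methylation" block
X = noise + signal

Z = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)   # shared low-dimensional subspace
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(np.unique(labels, return_counts=True))                    # roughly 60 samples per cluster
```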
Inter-class sparsity based discriminative least square regression.
Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong
2018-06-01
Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero-one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Implementation guide for turbidity threshold sampling: principles, procedures, and analysis
Jack Lewis; Rand Eads
2009-01-01
Turbidity Threshold Sampling uses real-time turbidity and river stage information to automatically collect water quality samples for estimating suspended sediment loads. The system uses a programmable data logger in conjunction with a stage measurement device, a turbidity sensor, and a pumping sampler. Specialized software enables the user to control the sampling...
Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi
2018-02-06
This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection show no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search for the maximum and minimum GCFs used in the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. The gait phases were then obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the reliability of the STTTA is determined by comparing its results with the Mariani method, referenced as the timing analysis module (TAM), and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and obtains high reliability compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
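A single-threshold reading of the two GCF channels already yields a usable phase sequence, which is what the self-tuning thresholds refine; the mapping of the (heel on, ball on) flags to four phase names, the 0.15 threshold and the synthetic force traces below are assumptions made for illustration, not the STTTA or GPDA rules.

```python
import numpy as np

PHASES = {(True, False): "heel strike", (True, True): "flat foot",
          (False, True): "push off",    (False, False): "swing"}

def gait_phases(gcf_heel, gcf_ball, threshold=0.15):
    heel_on = np.asarray(gcf_heel) > threshold       # heel FSR on-ground flag
    ball_on = np.asarray(gcf_ball) > threshold       # ball FSR on-ground flag
    return [PHASES[(bool(h), bool(b))] for h, b in zip(heel_on, ball_on)]

t = np.linspace(0.0, 1.0, 8)                                     # one normalised gait cycle
heel = np.clip(np.sin(np.pi * (t + 0.05) / 0.65), 0, None)       # heel loaded early in stance
ball = np.clip(np.sin(np.pi * (t - 0.15) / 0.75), 0, None)       # ball loaded later in stance
print(gait_phases(heel, ball))
```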
Wang, Ophelia; Zachmann, Luke J.; Sesnie, Steven E.; Olsson, Aaryn D.; Dickson, Brett G.
2014-01-01
Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. The sampling methods implemented here provide a meaningful tool when understanding the potential distribution and habitat of species over multi-jurisdictional and extensive areas is needed for achieving management objectives. PMID:25019621
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
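To illustrate the outer-layer idea of mapping categorical candidates to numerical coordinates, the sketch below places k categories at the vertices of a regular simplex so that all pairs are equidistant before kriging. This particular construction is an assumption for illustration and is not necessarily the SRSM formulation used in the article.

```python
import numpy as np

def regular_simplex_vertices(k):
    """Map k categorical levels to the vertices of a regular simplex.
    Construction (an assumption, not necessarily the article's SRSM):
    take the k standard basis vectors in R^k and centre them, so every
    pair of vertices is equidistant (distance sqrt(2))."""
    vertices = np.eye(k)
    return vertices - vertices.mean(axis=0)

coords = regular_simplex_vertices(4)                        # e.g. 4 candidate materials
print(np.round(np.linalg.norm(coords[0] - coords[1]), 3))   # 1.414 for every pair
```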
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
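A minimal sketch of the CTP construction follows: the one-dimensional Chebyshev zeros are computed in closed form and assembled into a full tensor-product grid; a CCM subset would then be drawn at random from these points.

```python
import numpy as np
from itertools import product

def chebyshev_zeros(n):
    """Zeros of the degree-n Chebyshev polynomial of the first kind on [-1, 1]."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def ctp_samples(n_per_dim, dim):
    """Chebyshev tensor-product (CTP) sample set: the full grid of 1-D zeros."""
    z = chebyshev_zeros(n_per_dim)
    return np.array(list(product(z, repeat=dim)))

grid = ctp_samples(5, 2)      # 25 two-dimensional samples
print(grid.shape)             # (25, 2)
```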
The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation
NASA Astrophysics Data System (ADS)
Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.
2018-04-01
This study uses standard-stripe, dual-polarization SAR images from GF-3 as the basic data and compares and analyzes residential-area extraction processes and methods based on GF-3 texture image segmentation. Preprocessing of the GF-3 images includes radiometric calibration, complex data conversion, multi-look processing, and image filtering; a suitability analysis of different filtering methods shows that the Kuan filter is effective for extracting residential areas. We then computed and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), examining the moving window size, step size, and angle; a window size of 11×11, a step of 1, and an angle of 0° proved effective and optimal for residential-area extraction. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. We also extracted residential areas by SVM classification applied directly to the GF-3 images; its overall accuracy is 0.09 lower than that of the texture-image-segmentation method. We conclude that residential-area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Because multi-spectral remote sensing imagery is difficult to obtain in southern China, where the weather is cloudy and rainy throughout the year, this approach has practical reference value.
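For reference, the short sketch below computes GLCM texture measures for a single 11×11 window with the reported step (distance 1) and angle (0°), using scikit-image. The quantization level and the set of properties are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(window, levels=32):
    """GLCM texture measures for one window, using the settings reported
    in the abstract (distance 1, angle 0 degrees)."""
    q = np.floor(window.astype(float) / window.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0.0],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "energy", "correlation")}

# toy 11x11 window of SAR amplitudes
win = np.random.default_rng(0).integers(0, 255, size=(11, 11))
print(glcm_features(win))
```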
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
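The Rubin's-rules pooling step can be sketched directly: per-imputation estimates at a given threshold are averaged, and their variance combines within- and between-imputation components. The numbers below are hypothetical.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m imputed-analysis results with Rubin's rules.
    estimates, variances: length-m arrays of the per-imputation pooled
    estimate and its squared standard error at one threshold."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled estimate
    w_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    t = w_bar + (1 + 1 / m) * b              # total variance
    return q_bar, np.sqrt(t)

est = [0.81, 0.78, 0.84, 0.80, 0.79]         # e.g. logit-sensitivity at one threshold
se2 = [0.010, 0.012, 0.011, 0.009, 0.010]
print(rubins_rules(est, se2))
```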
NASA Astrophysics Data System (ADS)
Alizadeh Sahraei, Abolfazl; Ayati, Moosa; Baniassadi, Majid; Rodrigue, Denis; Baghani, Mostafa; Abdi, Yaser
2018-03-01
This study attempts to comprehensively investigate the effects of multi-walled carbon nanotubes (MWCNTs) on the AC and DC electrical conductivity of epoxy nanocomposites. The samples (0.2, 0.3, and 0.5 wt. % MWCNT) were produced using a combination of ultrasonication and shear mixing methods. DC measurements were performed by continuous measurement of the current-voltage response and the results were analyzed via a numerical percolation approach, while for the AC behavior, the frequency response was studied by analyzing phase difference and impedance in the 10 Hz to 0.2 MHz frequency range. The results showed that the dielectric parameters, including relative permittivity, impedance phase, and magnitude, present completely different behaviors for the frequency range and MWCNT weight fractions studied. To better understand the nanocomposites' electrical behavior, equivalent electric circuits were also built for both DC and AC modes. The DC equivalent networks were developed based on the current-voltage curves, while the AC equivalent circuits were proposed by solving an optimization problem according to the impedance magnitude and phase at different frequencies. The obtained equivalent electrical circuits were found to be highly useful tools to understand the physical mechanisms involved in MWCNT-filled polymer nanocomposites.
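As an illustration of fitting an AC equivalent circuit to impedance magnitude and phase, the sketch below adjusts a single parallel R-C block to a synthetic spectrum over the 10 Hz to 0.2 MHz range; the circuit topology and parameter values are assumptions, not the networks reported in the study.

```python
import numpy as np
from scipy.optimize import least_squares

def z_parallel_rc(freq, r, c):
    """Complex impedance of a parallel R-C block (assumed circuit element)."""
    w = 2 * np.pi * freq
    return r / (1 + 1j * w * r * c)

def residuals(log_params, freq, z_meas):
    # fit in log10 space to handle the very different scales of R and C
    r, c = 10.0 ** log_params
    z = z_parallel_rc(freq, r, c)
    return np.concatenate([np.log10(np.abs(z) / np.abs(z_meas)),
                           np.angle(z) - np.angle(z_meas)])

# synthetic "measured" spectrum spanning the 10 Hz - 0.2 MHz range of the study
freq = np.logspace(1, np.log10(2e5), 40)
z_meas = z_parallel_rc(freq, 1e6, 1e-10)
fit = least_squares(residuals, x0=[5.0, -9.0], args=(freq, z_meas))
print(10.0 ** fit.x)   # recovered R (ohm) and C (F)
```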
USDA-ARS?s Scientific Manuscript database
LC-MS/MS and GC-MS based targeted metabolomics is typically conducted by analyzing and quantifying a cascade of metabolites with methods specifically developed for the metabolite class. Here we describe an approach for the development of multi-residue analytical profiles, calibration standards, and ...
ERIC Educational Resources Information Center
Chango, Joanna M.; McElhaney, Kathleen Boykin; Allen, Joseph P.; Schad, Megan M.; Marston, Emily
2012-01-01
The role of rejection sensitivity as a critical diathesis moderating the link between adolescent relational stressors and depressive symptoms was examined using multi-method, multi-reporter data from a diverse community sample of 173 adolescents, followed from age 16 to 18. Relational stressors examined included emotional abuse, maternal behavior…
VIGAN: Missing View Imputation with Generative Adversarial Networks.
Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo
2017-01-01
In an era when big data are becoming the norm, the concern is less with the quantity of data than with their quality and completeness. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. In particular, when certain samples are missing an entire view of the data, this creates the missing view problem. Classic multiple imputation or matrix completion methods are hardly effective here, because there is no information in the missing view on which to base the imputation for such samples. The commonly-used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly-sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. By then optimizing the GAN and DAE jointly, our model integrates knowledge of domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparing against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.
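A minimal sketch of the DAE stage of such an approach is shown below (not the authors' architecture): both views are encoded into a shared code and each view is decoded from it, so a missing view can be reconstructed from the observed one plus a GAN-produced initial guess. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiModalDAE(nn.Module):
    """Minimal multi-modal denoising autoencoder sketch for two views."""
    def __init__(self, d_a=20, d_b=20, d_code=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_a + d_b, 64), nn.ReLU(),
                                     nn.Linear(64, d_code))
        self.decoder_a = nn.Sequential(nn.Linear(d_code, 64), nn.ReLU(),
                                       nn.Linear(64, d_a))
        self.decoder_b = nn.Sequential(nn.Linear(d_code, 64), nn.ReLU(),
                                       nn.Linear(64, d_b))

    def forward(self, view_a, view_b):
        code = self.encoder(torch.cat([view_a, view_b], dim=1))
        return self.decoder_a(code), self.decoder_b(code)

model = MultiModalDAE()
a, b = torch.randn(8, 20), torch.randn(8, 20)
rec_a, rec_b = model(a + 0.1 * torch.randn_like(a), b)   # noisy input, clean target
loss = nn.functional.mse_loss(rec_a, a) + nn.functional.mse_loss(rec_b, b)
print(loss.item())
```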
Wang, Chao-Qun; Jia, Xiu-Hong; Zhu, Shu; Komatsu, Katsuko; Wang, Xuan; Cai, Shao-Qing
2015-03-01
A new quantitative analysis of multi-component with single marker (QAMS) method for 11 saponins (ginsenosides Rg1, Rb1, Rg2, Rh1, Rf, Re and Rd; notoginsenosides R1, R4, Fa and K) in notoginseng was established, in which 6 of these saponins were individually used as internal reference substances to investigate the influence of chemical structure, the concentrations of the quantified components, and the purities of the standard substances on the accuracy of the QAMS method. The results showed that the concentration of the analyte in the sample solution was the major influencing parameter, whereas the other parameters had minimal influence on the accuracy of the QAMS method. A new method for calculating the relative correction factors by linear regression was established (linear regression method), which decreased the differences between the QAMS method and the external standard method from 1.20%±0.02%-23.29%±3.23% to 0.10%±0.09%-8.84%±2.85% compared with the previous method. The differences between the external standard method and the QAMS method using relative correction factors calculated by the linear regression method were below 5% in the quantitative determination of Rg1, Re, R1, Rd and Fa in 24 notoginseng samples and of Rb1 in 21 notoginseng samples, and were mostly below 10% in the quantitative determination of Rf, Rg2, R4 and N-K in all 24 notoginseng samples (the differences for these 4 constituents were larger because their contents were lower). The results indicated that the contents assayed by the new QAMS method could be considered as accurate as those assayed by the external standard method. In addition, a method for determining the applicable concentration ranges of the components quantified by the QAMS method was established for the first time, which could ensure its high accuracy and could be applied to QAMS methods of other TCMs. The present study demonstrates the practicability of the QAMS method for the quantitative analysis of multiple components and the quality control of TCMs and TCM prescriptions. Copyright © 2014 Elsevier B.V. All rights reserved.
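The linear-regression route to a relative correction factor can be sketched as follows: calibration slopes are fitted for the reference and the analyte, their ratio gives the correction factor, and the analyte content in a sample is then computed from the reference calibration alone. All calibration data below are hypothetical.

```python
import numpy as np

def calibration_slope(conc, area):
    """Least-squares slope (with intercept) of peak area versus concentration."""
    slope, _ = np.polyfit(conc, area, 1)
    return slope

# hypothetical calibration data for the internal reference and one analyte
c_ref, a_ref = np.array([5, 10, 20, 40, 80.]), np.array([51, 99, 202, 405, 810.])
c_ana, a_ana = np.array([5, 10, 20, 40, 80.]), np.array([40, 82, 159, 322, 640.])

f = calibration_slope(c_ana, a_ana) / calibration_slope(c_ref, a_ref)

# QAMS-style quantification of the analyte in a sample using only the
# reference calibration and the relative correction factor f
k_ref = calibration_slope(c_ref, a_ref)
area_sample = 250.0
conc_estimate = area_sample / (f * k_ref)
print(round(f, 3), round(conc_estimate, 1))
```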
Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.
Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite: a single cellulose sponge samples multiple coupons; 2) single medium multi-pass composite: a single cellulose sponge is used to sample multiple coupons; and 3) multi-medium post-sample composite: a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2, respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method compared to single medium composites from both clean and grime-coated materials. RE with the PSC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wallboard, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and an improved Dlib model. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to identify a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to raise the accuracy of multi-view face recognition. We applied the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; simulations and tests show satisfactory experimental results.
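As a concrete illustration of the detection stage, the sketch below runs an AdaBoost-trained Haar cascade with OpenCV, using the library's bundled frontal-face model as a stand-in for the cascade trained in the paper; the input file name is hypothetical.

```python
import cv2

# Bundled frontal-face Haar cascade shipped with opencv-python
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("example.jpg")          # hypothetical input image
assert img is not None, "provide a test image"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# sliding-window detection over scales; parameters are illustrative
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(len(faces), "face(s) detected")
```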
NASA Astrophysics Data System (ADS)
Zhang, Lixin; Lin, Min; Wan, Baikun; Zhou, Yu; Wang, Yizhong
2005-01-01
In this paper, a new method for measuring body fat and its distribution is proposed based on CT image processing. Because CT is more sensitive to slight differences in attenuation than standard radiography, it depicts soft tissues with better clarity, and body fat occupies a distinct gray-level range compared with its neighboring tissues in a CT image. An effective multi-threshold image segmentation method based on potential function clustering is used to handle the multiple peaks in the gray-level histogram of a CT image. Abdominal CT images of 14 volunteers of differing fatness were processed with the proposed method. Not only the total fat area but also the differentiation of subcutaneous fat from intra-abdominal fat was obtained. The results show the adaptability and stability of the proposed method, which will be a useful tool for diagnosing obesity.
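A crude illustration of threshold-based fat segmentation is given below, using a commonly quoted Hounsfield-unit band for adipose tissue rather than the paper's potential-function-clustering thresholds; the band and pixel spacing are assumptions.

```python
import numpy as np

def segment_fat(ct_slice_hu, low=-190, high=-30):
    """Threshold segmentation sketch: adipose tissue occupies a distinct
    gray-level (Hounsfield) band, here the commonly quoted -190..-30 HU
    range (an assumption, not the paper's clustered thresholds)."""
    return (ct_slice_hu >= low) & (ct_slice_hu <= high)

def fat_area_cm2(mask, pixel_spacing_mm=(0.8, 0.8)):
    """Total fat area from the binary mask and the pixel spacing."""
    return mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0

slice_hu = np.random.default_rng(1).integers(-1000, 1000, size=(512, 512))  # toy "CT" slice
mask = segment_fat(slice_hu)
print(round(fat_area_cm2(mask), 1))
```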
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi-degree-of-freedom system are generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of the proposed algorithms.
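The sketch below illustrates the general workflow with a toy example: fit an AR model to baseline acceleration, build a resampling-based control limit on a residual statistic, and flag new data whose statistic exceeds the limit. Model order, block size, and the chosen statistic are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x_t ~ a1*x_{t-1} + ... + ap*x_{t-p}."""
    X = np.column_stack([x[p - 1 - i: len(x) - 1 - i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def ar_residuals(x, coef):
    """One-step-ahead prediction residuals under a fixed AR model."""
    p = len(coef)
    X = np.column_stack([x[p - 1 - i: len(x) - 1 - i] for i in range(p)])
    return x[p:] - X @ coef

rng = np.random.default_rng(0)
baseline = rng.standard_normal(2000)                  # toy healthy-state acceleration
coef = fit_ar(baseline, p=6)

# resampling-based threshold on the residual standard deviation (control chart limit)
res0 = ar_residuals(baseline, coef)
stats = [np.std(rng.choice(res0, 200, replace=True)) for _ in range(1000)]
limit = np.quantile(stats, 0.99)

damaged = rng.standard_normal(300) * 1.4              # toy "damaged" record: larger residuals
print(np.std(ar_residuals(damaged, coef)) > limit)    # True suggests a change of state
```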
Optical microscope using an interferometric source of two-color, two-beam entangled photons
Dress, William B.; Kisner, Roger A.; Richards, Roger K.
2004-07-13
Systems and methods are described for an optical microscope using an interferometric source of multi-color, multi-beam entangled photons. A method includes: downconverting a beam of coherent energy to provide a beam of multi-color entangled photons; converging two spatially resolved portions of the beam of multi-color entangled photons into a converged multi-color entangled photon beam; transforming at least a portion of the converged multi-color entangled photon beam by interaction with a sample to generate an entangled photon specimen beam; and combining the entangled photon specimen beam with an entangled photon reference beam within a single beamsplitter. An apparatus includes: a multi-refringent device providing a beam of multi-color entangled photons; a condenser device optically coupled to the multi-refringent device, the condenser device converging two spatially resolved portions of the beam of multi-color entangled photons into a converged multi-color entangled photon beam; a beam probe director and specimen assembly optically coupled to the condenser device; and a beam splitter optically coupled to the beam probe director and specimen assembly, the beam splitter combining an entangled photon specimen beam from the beam probe director and specimen assembly with an entangled photon reference beam.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
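An empirical-Bayes-flavoured sketch of the shrinkage idea is shown below: per-variant MLEs in a region are pulled toward the regional mean with weights set by a method-of-moments estimate of the between-estimate variance. This is an assumed simplification of the paper's hierarchical Bayesian model.

```python
import numpy as np

def shrink_to_region(beta_hat, se):
    """Shrink per-variant MLE estimates toward their regional mean."""
    beta_hat, se = np.asarray(beta_hat, float), np.asarray(se, float)
    mu = beta_hat.mean()
    tau2 = max(beta_hat.var(ddof=1) - np.mean(se ** 2), 0.0)   # between-estimate variance
    w = tau2 / (tau2 + se ** 2)                                # per-estimate shrinkage weight
    return w * beta_hat + (1 - w) * mu

beta = [0.05, 0.90, 0.10, -0.02, 0.07]     # one extreme MLE in a mostly null region
se   = [0.20, 0.35, 0.22, 0.25, 0.21]
print(np.round(shrink_to_region(beta, se), 3))
```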
NASA Astrophysics Data System (ADS)
Feng, Yanchun; Lei, Deqing; Hu, Changqin
We created a rapid detection procedure for identifying herbal medicines illegally adulterated with synthetic drugs using near infrared spectroscopy. This procedure includes a reverse correlation coefficient method (RCCM) and comparison of characteristic peaks. Moreover, we made improvements to the RCCM based on new strategies for threshold settings. Any tested herbal medicine must meet two criteria to be identified with our procedure as adulterated. First, the correlation coefficient between the tested sample and the reference must be greater than the RCCM threshold. Next, the NIR spectrum of the tested sample must contain the same characteristic peaks as the reference. In this study, four pure synthetic anti-diabetic drugs (i.e., metformin, gliclazide, glibenclamide and glimepiride), 174 batches of laboratory samples and 127 batches of herbal anti-diabetic medicines were used to construct and validate the procedure. The accuracy of this procedure was greater than 80%. Our data suggest that this protocol is a rapid screening tool to identify synthetic drug adulterants in herbal medicines on the market.
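The two screening criteria can be sketched as follows; the correlation threshold, peak-matching tolerance, and synthetic spectra are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def correlation_screen(sample, reference, threshold=0.95):
    """First criterion: correlation between the test spectrum and the
    adulterant reference must exceed a threshold (0.95 is illustrative)."""
    return np.corrcoef(sample, reference)[0, 1] > threshold

def characteristic_peaks_present(sample, reference_peak_idx, tol=3, height=0.1):
    """Second criterion: every characteristic reference peak position must be
    matched by a sample peak within +/- tol points."""
    sample_peaks, _ = find_peaks(sample, height=height)
    return all(np.min(np.abs(sample_peaks - i)) <= tol for i in reference_peak_idx)

x = np.linspace(0, 1, 200)
reference = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.03) ** 2)
sample = reference + 0.02 * np.random.default_rng(2).standard_normal(x.size)
flag = correlation_screen(sample, reference) and characteristic_peaks_present(sample, [60, 140])
print("suspected adulteration:", flag)
```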
NASA Astrophysics Data System (ADS)
Xie, Fengle; Jiang, Zhansi; Jiang, Hui
2018-05-01
This paper presents a multi-damage identification method for cantilever beams. First, the damage locations are identified using the mode shape curvatures. Second, samples of varying damage severities at each damage location and their corresponding natural frequencies are used to construct an initial Kriging surrogate model. A particle swarm optimization (PSO) algorithm is then employed to identify the damage severities based on the Kriging surrogate model. A simulation study of a double-damaged cantilever beam demonstrated that the proposed method is effective.
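A compact sketch of the severity-identification step follows, using a Gaussian-process regressor as a stand-in for the Kriging surrogate and a plain particle swarm to match a measured natural frequency; the forward model and all parameter values are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical forward model: natural frequency as a function of damage
# severity at an already-located crack; in practice these samples would
# come from a finite-element model of the cantilever beam.
def first_natural_freq(severity):
    return 120.0 * (1.0 - 0.35 * severity ** 1.5)

severities = np.linspace(0, 1, 12).reshape(-1, 1)
freqs = first_natural_freq(severities).ravel()
surrogate = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(severities, freqs)

measured = 105.0                                  # measured frequency of the damaged beam

# plain particle swarm (not necessarily the authors' variant) searching for
# the severity whose predicted frequency matches the measurement
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30); v = np.zeros(30)
pbest = x.copy(); pbest_f = (surrogate.predict(x[:, None]) - measured) ** 2
for _ in range(60):
    gbest = pbest[np.argmin(pbest_f)]
    v = 0.7 * v + 1.5 * rng.random(30) * (pbest - x) + 1.5 * rng.random(30) * (gbest - x)
    x = np.clip(x + v, 0, 1)
    f = (surrogate.predict(x[:, None]) - measured) ** 2
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
print(round(pbest[np.argmin(pbest_f)], 3))        # estimated damage severity (~0.5 here)
```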
Schmitt, Stephen J.; Fram, Miranda S.; Milby Dawson, Barbara J.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 3,340 square mile Middle Sacramento Valley study unit (MSACV) was investigated from June through September, 2006, as part of the California Groundwater Ambient Monitoring and Assessment (GAMA) program. The GAMA Priority Basin Assessment project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Middle Sacramento Valley study was designed to provide a spatially unbiased assessment of raw ground-water quality within MSACV, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 108 wells in Butte, Colusa, Glenn, Sutter, Tehama, Yolo, and Yuba Counties. Seventy-one wells were selected using a randomized grid-based method to provide statistical representation of the study unit (grid wells), 15 wells were selected to evaluate changes in water chemistry along ground-water flow paths (flow-path wells), and 22 were shallow monitoring wells selected to assess the effects of rice agriculture, a major land use in the study unit, on ground-water chemistry (RICE wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], gasoline oxygenates and degradates, pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, and carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon), and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. Quality-control samples (blanks, replicates, laboratory matrix spikes) were collected at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the ground-water samples. Differences between replicate samples were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most constituents. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and are not indicative of compliance or noncompliance with regulatory thresholds. 
Most constituents that were detected in ground-water samples were found at concentrations below drinking-water thresholds. VOCs were detected in less than one-third and pesticides and pesticide degradates in just over one-half of the grid wells, and all detections of these constituents in samples from all wells of the MSACV study unit were below health-based thresholds. All detections of trace elements in samples from MSACV grid wells were below health-based thresholds, with the exceptions of arsenic and boron.
Robust and fast pedestrian detection method for far-infrared automotive driving assistance systems
NASA Astrophysics Data System (ADS)
Liu, Qiong; Zhuang, Jiajun; Ma, Jun
2013-09-01
Although considerable effort has been devoted to night-time pedestrian detection for automotive driving assistance systems in recent years, robust and real-time pedestrian detection is by no means a trivial task and remains an open problem due to moving cameras, uncontrolled outdoor environments, the wide range of possible pedestrian presentations, and the stringent performance criteria for automotive applications. This paper presents an alternative night-time pedestrian detection method using a monocular far-infrared (FIR) camera, which includes two modules (regions of interest (ROIs) generation and pedestrian recognition) in a cascade fashion. Pixel-gradient oriented vertical projection is first proposed to estimate the vertical image stripes that might contain pedestrians, and then local thresholding image segmentation is adopted to generate ROIs more accurately within the estimated vertical stripes. A novel descriptor called PEWHOG (pyramid entropy weighted histograms of oriented gradients) is proposed to represent FIR pedestrians in the recognition module. Specifically, PEWHOG captures both the local object shape, described by the entropy-weighted distribution of oriented gradient histograms, and its pyramid spatial layout. PEWHOG is then fed to a three-branch structured classifier using support vector machines (SVM) with a histogram intersection kernel (HIK). An off-line training procedure combining both bootstrapping and early-stopping strategies is introduced to generate a more robust classifier by exploiting hard negative samples iteratively. Finally, multi-frame validation is utilized to suppress some transient false positives. Experimental results on FIR video sequences from various scenarios demonstrate that the presented method is effective and promising.
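The classifier stage relies on the histogram intersection kernel; the sketch below trains an SVM with a precomputed HIK on toy block-histogram descriptors standing in for PEWHOG. Descriptor size and data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def hik(A, B):
    """Histogram intersection kernel: K(x, y) = sum_i min(x_i, y_i)."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

rng = np.random.default_rng(0)
# toy block-histogram descriptors standing in for PEWHOG (36 bins here)
pos = rng.dirichlet(np.ones(36) * 2.0, 200)        # "pedestrian" windows
neg = rng.dirichlet(np.ones(36) * 0.5, 200)        # "background" windows
X = np.vstack([pos, neg]); y = np.r_[np.ones(200), np.zeros(200)]

clf = SVC(kernel="precomputed", C=10.0).fit(hik(X, X), y)
test = rng.dirichlet(np.ones(36) * 2.0, 5)
print(clf.predict(hik(test, X)))                   # expected mostly 1s
```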
Elwaer, Nagmeddin; Hintelmann, Holger
2007-11-01
The analytical performance of five sample introduction systems, a cross-flow nebulizer spray chamber, two different solvent desolvation systems, a multi-mode sample introduction system (MSIS), and a hydride generation (LI2) system, was compared for Se isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP/MS). The optimal operating parameters for obtaining the highest Se signal-to-noise (S/N) ratios and isotope ratio precision for each sample introduction system were determined. The hydride generation (LI2) system was identified as the most suitable sample introduction method, yielding maximum sensitivity and precision for Se isotope ratio measurement. It provided five times higher S/N ratios for all Se isotopes compared to the MSIS, 20 times the S/N ratios of both desolvation units, and 100 times the S/N ratios produced by the conventional spray chamber sample introduction method. The internal precision achieved for the (78)Se/(82)Se ratio at 100 ng mL(-1) Se with the spray chamber, the two desolvation systems, the MSIS, and the LI2 system coupled to MC-ICP/MS was 150, 125, 114, 13, and 7 ppm, respectively. Instrument mass bias factors (K) were calculated using an exponential law correction function. Among the five studied sample introduction systems, the LI2 system showed the lowest mass bias (-0.0265) and the desolvation system the largest (-0.0321).
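For context, a common exponential-law mass bias correction has the form sketched below; the exact convention and constants used by the authors are not stated in the abstract, so treat this as an assumption.

```python
# Exponential-law mass bias correction (a standard formulation, assumed here):
#   R_true = R_measured * (m_a / m_b) ** beta
# where beta (K in the text) is the per-mass-unit bias factor estimated from
# a ratio of known isotopic composition.
m78, m82 = 77.9173, 81.9167          # approximate Se isotope masses (u)
r_measured = 0.315                   # hypothetical measured 78Se/82Se ratio
beta = -0.0265                       # LI2 value reported in the abstract
r_corrected = r_measured * (m78 / m82) ** beta
print(round(r_corrected, 5))
```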
Threshold Velocity for Saltation Activity in the Taklimakan Desert
NASA Astrophysics Data System (ADS)
Yang, Xinghua; He, Qing; Matimin, Ali; Yang, Fan; Huo, Wen; Liu, Xinchun; Zhao, Tianliang; Shen, Shuanghe
2017-12-01
The threshold velocity is an indicator of a soil's susceptibility to saltation activity and is also an important parameter in dust emission models. In this study, the saltation activity, atmospheric conditions, and soil conditions were measured from 1 August 2008 to 31 July 2009 in the Taklimakan Desert, China. The threshold velocity was estimated using the Gaussian time fraction equivalence method. At 2 m height, the 1-min averaged threshold velocity varied between 3.5 and 10.9 m/s, with a mean of 5.9 m/s. Threshold velocities varying between 4.5 and 7.5 m/s accounted for about 91.4% of all measurements. The average threshold velocity displayed clear seasonal variations in the following sequence: winter (5.1 m/s) < autumn (5.8 m/s) < spring (6.1 m/s) < summer (6.5 m/s). A regression equation of threshold velocity was established based on the relations between daily mean threshold velocity and air temperature, specific humidity, and soil volumetric moisture content. High or moderate positive correlations were found between threshold velocity and air temperature, specific humidity, and soil volumetric moisture content (air temperature r = 0.75; specific humidity r = 0.59; and soil volumetric moisture content r = 0.55; sample size = 251). In the study area, the observed horizontal dust flux was 4198.0 kg/m during the whole period of observation, while the horizontal dust flux calculated using the threshold velocity from the regression equation was 4675.6 kg/m. The correlation coefficient between the calculated result and the observations was 0.91. These results indicate that atmospheric and soil conditions should not be neglected in parameterization schemes for threshold velocity.
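The regression step can be sketched as an ordinary least-squares fit of daily mean threshold velocity on the three predictors; the synthetic data and fitted coefficients below are illustrative, not the published equation.

```python
import numpy as np

# Multiple linear regression of daily mean threshold velocity on air temperature,
# specific humidity, and soil volumetric moisture content; coefficients here are
# fitted to synthetic data, not the published ones.
rng = np.random.default_rng(0)
n = 251                                               # sample size quoted in the abstract
temp = rng.uniform(-5, 35, n)                         # deg C
q    = rng.uniform(1, 12, n)                          # g/kg specific humidity
smc  = rng.uniform(0.0, 0.05, n)                      # m3/m3 soil moisture
u_t  = 4.5 + 0.03 * temp + 0.08 * q + 12.0 * smc + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), temp, q, smc])
coef, *_ = np.linalg.lstsq(X, u_t, rcond=None)
print(np.round(coef, 3))                              # intercept and three slopes
predicted = X @ coef                                  # daily threshold velocity estimates
```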
Averbeck, Beate; Seitz, Lena; Kolb, Florian P; Kutz, Dieter F
2017-09-01
Sex-related differences in human thermal and pain sensitivity are the subject of controversial discussion. The goal of this study in a large number of subjects was to investigate sex differences in thermal and thermal pain perception and the thermal grill illusion (TGI) as a phenomenon reflecting crosstalk between the thermoreceptive and nociceptive systems. The thermal grill illusion is a sensation of strong, but not necessarily painful, heat often preceded by transient cold upon skin contact with spatially interlaced innocuous warm and cool stimuli. The TGI was studied in a group of 78 female and 58 male undergraduate students and was evoked by placing the palm of the right hand on the thermal grill (20/40 °C interleaved stimulus). Sex-related thermal perception was investigated by a retrospective analysis of thermal detection and thermal pain threshold data that had been measured in student laboratory courses over 5 years (776 female and 476 male undergraduate students) using the method of quantitative sensory testing (QST). To analyse correlations between thermal pain sensitivity and the TGI, thermal pain threshold and the TGI were determined in a group of 20 female and 20 male undergraduate students. The TGI was more pronounced in females than males. Females were more sensitive with respect to thermal detection and thermal pain thresholds. Independent of sex, thermal detection thresholds were dependent on the baseline temperature with a specific progression of an optimum curve for cold detection threshold versus baseline temperature. The distribution of cold pain thresholds was multi-modal and sex-dependent. The more pronounced TGI in females correlated with higher cold sensitivity and cold pain sensitivity in females than in males. Our finding that thermal detection threshold not only differs between the sexes but is also dependent on the baseline temperature reveals a complex processing of "cold" and "warm" inputs in thermal perception. The results of the TGI experiment support the assumption that sex differences in cold-related thermoreception are responsible for sex differences in the TGI.
Pasquarella, Cesira; Veronesi, Licia; Napoli, Christian; Castiglia, Paolo; Liguori, Giorgio; Rizzetto, Rolando; Torre, Ida; Righi, Elena; Farruggia, Patrizia; Tesauro, Marina; Torregrossa, Maria V; Montagna, Maria T; Colucci, Maria E; Gallè, Francesca; Masia, Maria D; Strohmenger, Laura; Bergomi, Margherita; Tinteri, Carola; Panico, Manuela; Pennino, Francesca; Cannova, Lucia; Tanzi, Marialuisa
2012-03-15
A microbiological environmental investigation was carried out in ten dental clinics in Italy. Microbial contamination of water, air and surfaces was assessed in each clinic during the five working days, for one week per month, for a three-month period. Water and surfaces were sampled before and after clinical activity; air was sampled before, after, and during clinical activity. A wide variation was found in microbial environmental contamination, both within the participating clinics and for the different sampling times. Before clinical activity, microbial water contamination in tap water reached 51,200 cfu/mL (colony forming units per milliliter), and that in Dental Unit Water Systems (DUWSs) reached 872,000 cfu/mL. After clinical activity, there was a significant decrease in the Total Viable Count (TVC) in tap water and in DUWSs. Pseudomonas aeruginosa was found in 2.38% (7/294) of tap water samples and in 20.06% (59/294) of DUWS samples; Legionella spp. was found in 29.96% (89/297) of tap water samples and 15.82% (47/297) of DUWS samples, with no significant difference between pre- and post-clinical activity. Microbial air contamination was highest during dental treatments, and decreased significantly at the end of the working activity (p<0.05). The microbial buildup on surfaces increased significantly during the working hours. This study provides data for the establishment of standardized sampling methods, and threshold values for contamination monitoring in dentistry. Some very critical situations have been observed which require urgent intervention. Furthermore, the study emphasizes the need for research aimed at defining effective managing strategies for dental clinics. Copyright © 2012 Elsevier B.V. All rights reserved.
Chen, Yisheng; Schwack, Wolfgang
2014-08-22
The world-wide usage and partly abuse of veterinary antibiotics resulted in a pressing need to control residues in animal-derived foods. Large-scale screening for residues of antibiotics is typically performed by microbial agar diffusion tests. This work employing high-performance thin-layer chromatography (HPTLC) combined with bioautography and electrospray ionization mass spectrometry introduces a rapid and efficient method for a multi-class screening of antibiotic residues. The viability of the bioluminescent bacterium Aliivibrio fischeri to the studied antibiotics (16 species of 5 groups) was optimized on amino plates, enabling detection sensitivity down to the strictest maximum residue limits. The HPTLC method was developed not to separate the individual antibiotics, but for cleanup of sample extracts. The studied antibiotics either remained at the start zones (tetracyclines, aminoglycosides, fluoroquinolones, and macrolides) or migrated into the front (amphenicols), while interfering co-extracted matrix compounds were dispersed at hRf 20-80. Only after a few hours, the multi-sample plate image clearly revealed the presence or absence of antibiotic residues. Moreover, molecular information as to the suspected findings was rapidly achieved by HPTLC-mass spectrometry. Showing remarkable sensitivity and matrix-tolerance, the established method was successfully applied to milk and kidney samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Multichannel FPGA based MVT system for high precision time (20 ps RMS) and charge measurement
NASA Astrophysics Data System (ADS)
Pałka, M.; Strzempek, P.; Korcyl, G.; Bednarski, T.; Niedźwiecki, Sz.; Białas, P.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Jasińska, B.; Kamińska, D.; Kajetanowicz, M.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Mohhamed, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.; Zgardzińska, B.; Moskal, P.
2017-08-01
This article presents an FPGA-based Multi-Voltage Threshold (MVT) system that allows sampling of fast signals (1-2 ns rising and falling edges) in both the voltage and time domains. It achieves a time measurement precision of 20 ps RMS and, using a simple approach, reconstructs the signal charge with a deviation from the real value smaller than 10%. Utilization of the differential inputs of an FPGA chip as comparators together with an implementation of a TDC inside an FPGA allowed us to achieve a compact multi-channel system characterized by low power consumption and low production costs. This paper describes the realization and functioning of the system, which comprises a 192-channel TDC board and four mezzanine cards that split and discriminate the incoming signals. The boards have been used to validate a newly developed Time-of-Flight Positron Emission Tomography system based on plastic scintillators. The achieved full-system time resolution of σ(TOF) ≈ 68 ps is better by a factor of two than that of current TOF-PET systems.
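A simple way to picture charge reconstruction from MVT samples is sketched below: the pulse is rebuilt as a piecewise-linear curve through the threshold-crossing points on both edges and integrated. The threshold set and crossing times are hypothetical, and this is not necessarily the exact procedure used by the authors.

```python
import numpy as np

def mvt_charge(levels_mv, t_rise_ns, t_fall_ns):
    """Piecewise-linear reconstruction through the (time, threshold) points on
    the leading and trailing edges, integrated with the trapezoid rule.
    The result (mV*ns) is proportional to the pulse charge."""
    t = np.concatenate([t_rise_ns, t_fall_ns[::-1]])
    v = np.concatenate([levels_mv, levels_mv[::-1]])
    return np.sum((t[1:] - t[:-1]) * (v[1:] + v[:-1]) / 2.0)

levels = np.array([50., 100., 200., 300.])    # hypothetical threshold set (mV)
t_rise = np.array([1.0, 1.3, 1.8, 2.4])       # leading-edge crossing times (ns)
t_fall = np.array([9.5, 8.9, 7.6, 6.0])       # trailing-edge crossing times (ns)
print(round(mvt_charge(levels, t_rise, t_fall), 1))
```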
Knudsen, Anders Dahl; Bennike, Tue; Kjeldal, Henrik; Birkelund, Svend; Otzen, Daniel Erik; Stensballe, Allan
2014-05-30
We describe Condenser, a freely available, comprehensive open-source tool for merging multidimensional quantitative proteomics data from the Matrix Science Mascot Distiller Quantitation Toolbox into a common format ready for subsequent bioinformatic analysis. A number of different relative quantitation technologies, such as metabolic (15)N and amino acid stable isotope incorporation, label-free and chemical-label quantitation are supported. The program features multiple options for curative filtering of the quantified peptides, allowing the user to choose data quality thresholds appropriate for the current dataset, and ensure the quality of the calculated relative protein abundances. Condenser also features optional global normalization, peptide outlier removal, multiple testing and calculation of t-test statistics for highlighting and evaluating proteins with significantly altered relative protein abundances. Condenser provides an attractive addition to the gold-standard quantitative workflow of Mascot Distiller, allowing easy handling of larger multi-dimensional experiments. Source code, binaries, test data set and documentation are available at http://condenser.googlecode.com/. Copyright © 2014 Elsevier B.V. All rights reserved.
Multi-parameter analysis of titanium vocal fold medializing implant in an excised larynx model
Witt, Rachel E.; Hoffman, Matthew R.; Friedrich, Gerhard; Rieves, Adam L.; Schoepke, Benjamin J.; Jiang, Jack J.
2010-01-01
Objective Evaluate the efficacy of the titanium vocal fold medializing implant (TVFMI) for the treatment of unilateral vocal fold paralysis (UVFP) based on acoustic, aerodynamic, and mucosal wave measurements in an excised larynx setup. Methods Measurements were recorded on eight excised canine larynges with simulated UVFP before and after medialization with the TVFMI. Results Phonation threshold flow (PTF) and phonation threshold power (PTW) decreased significantly after medialization (p<0.001; p=0.008). Phonation threshold pressure (PTP) also decreased, but this difference was not significant (p=0.081). Percent jitter and percent shimmer decreased significantly after medialization (p=0.005; p=0.034). Signal to noise ratio (SNR) increased significantly (p=0.05). Differences in mucosal wave characteristics were discernable, but not significant. Phase difference between the normal and paralyzed vocal fold and amplitude of the paralyzed vocal fold decreased (p=0.15; p=0.78). Glottal gap decreased significantly (p=0.004). Conclusions The TVFMI was effective in achieving vocal fold medialization, improving vocal aerodynamic and acoustic characteristics of phonation significantly and mucosal wave characteristics discernibly. This study provides objective, quantitative support for the use of the TVFMI in improving vocal function in patients with unilateral vocal fold paralysis. PMID:20336924
Water quality and bed sediment quality in the Albemarle Sound, North Carolina, 2012–14
Moorman, Michelle C.; Fitzgerald, Sharon A.; Gurley, Laura N.; Rhoni-Aref, Ahmed; Loftin, Keith A.
2017-01-23
The Albemarle Sound region was selected in 2012 as one of two demonstration sites in the Nation to test and improve the design of the National Water Quality Monitoring Council’s National Monitoring Network (NMN) for U.S. Coastal Waters and Tributaries. The goal of the NMN for U.S. Coastal Waters and Tributaries is to provide information about the health of our oceans, coastal ecosystems, and inland influences on coastal waters for improved resource management. The NMN is an integrated, multidisciplinary, and multi-organizational program using multiple sources of data and information to augment current monitoring programs.This report presents and summarizes selected water-quality and bed sediment-quality data collected as part of the demonstration project conducted in two phases. The first phase was an occurrence and distribution study to assess nutrients, metals, pesticides, cyanotoxins, and phytoplankton communities in the Albemarle Sound during the summer of 2012 at 34 sites in Albemarle Sound, nearby sounds, and various tributaries. The second phase consisted of monthly sampling over a year (March 2013 through February 2014) to assess seasonality in a more limited set of constituents including nutrients, cyanotoxins, and phytoplankton communities at a subset (eight) of the sites sampled in the first phase. During the summer of 2012, few constituent concentrations exceeded published water-quality thresholds; however, elevated levels of chlorophyll a and pH were observed in the northern embayments and in Currituck Sound. Chlorophyll a, and metals (copper, iron, and zinc) were detected above a water-quality threshold. The World Health Organization provisional guideline based on cyanobacterial density for high recreational risk was exceeded in approximately 50 percent of water samples collected during the summer of 2012. Cyanobacteria capable of producing toxins were present, but only low levels of cyanotoxins below human health benchmarks were detected. Finally, 12 metals in surficial bed sediments were detected at levels above a published sediment-quality threshold. These metals included chromium, mercury, copper, lead, arsenic, nickel, and cadmium. Sites with several metal concentrations above the respective thresholds had relatively high concentrations of organic carbon or fine sediment (silt plus clay), or both and were predominantly located in the western and northwestern parts of the Albemarle Sound.Results from the second phase were generally similar to those of the first in that relatively few constituents exceeded a water-quality threshold, both pH and chlorophyll a were detected above the respective water-quality thresholds, and many of these elevated concentrations occurred in the northern embayments and in Currituck Sound. In contrast to the results from phase one, the cyanotoxin, microcystin was detected at more than 10 times the water-quality threshold during a phytoplankton bloom on the Chowan River at Mount Gould, North Carolina in August of 2013. This was the only cyanotoxin concentration measured during the entire study that exceeded a respective water-quality threshold.The information presented in this report can be used to improve understanding of water-quality conditions in the Albemarle Sound, particularly when evaluating causal and response variables that are indicators of eutrophication. 
In particular, this information can be used by State agencies to help develop water-quality criteria for nutrients, and to understand factors like cyanotoxins that may affect fisheries and recreation in the Albemarle Sound region.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
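An illustrative version of the consistency idea is sketched below: the matched-filter output is split into code-period blocks, the location of the maximum in each block is recorded, and acquisition is declared when the locations agree. Block count, tolerance, and the toy signal are assumptions.

```python
import numpy as np

def consistent_peak_detect(matched_out, period, n_blocks=8, tol=2):
    """Record the argmax location in each code-period block of the matched-filter
    output and declare acquisition if the locations agree to within +/- tol samples
    in nearly all blocks. Parameter values are assumptions for the sketch."""
    peaks = [int(np.argmax(matched_out[i * period:(i + 1) * period])) for i in range(n_blocks)]
    ref = int(np.median(peaks))
    hits = sum(abs(p - ref) <= tol for p in peaks)
    return hits >= n_blocks - 1, ref

rng = np.random.default_rng(3)
period = 256
signal = rng.standard_normal(period * 8)
signal[100::period] += 6.0                     # correlation peak at offset 100 every period
print(consistent_peak_detect(signal, period))  # (True, 100); pure noise typically gives False
```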
Matching health information seekers' queries to medical terms
2012-01-01
Background The Internet is a major source of health information but most seekers are not familiar with medical vocabularies. Hence, their searches fail due to bad query formulation. Several methods have been proposed to improve information retrieval: query expansion, syntactic and semantic techniques or knowledge-based methods. However, it would be useful to clean those queries which are misspelled. In this paper, we propose a simple yet efficient method in order to correct misspellings of queries submitted by health information seekers to a medical online search tool. Methods In addition to query normalizations and exact phonetic term matching, we tested two approximate string comparators: the similarity score function of Stoilos and the normalized Levenshtein edit distance. We propose here to combine them to increase the number of matched medical terms in French. We first took a sample of query logs to determine the thresholds and processing times. In the second run, at a greater scale we tested different combinations of query normalizations before or after misspelling correction with the retained thresholds in the first run. Results According to the total number of suggestions (around 163, the number of the first sample of queries), at a threshold comparator score of 0.3, the normalized Levenshtein edit distance gave the highest F-Measure (88.15%) and at a threshold comparator score of 0.7, the Stoilos function gave the highest F-Measure (84.31%). By combining Levenshtein and Stoilos, the highest F-Measure (80.28%) is obtained with 0.2 and 0.7 thresholds respectively. However, queries are composed by several terms that may be combination of medical terms. The process of query normalization and segmentation is thus required. The highest F-Measure (64.18%) is obtained when this process is realized before spelling-correction. Conclusions Despite the widely known high performance of the normalized edit distance of Levenshtein, we show in this paper that its combination with the Stoilos algorithm improved the results for misspelling correction of user queries. Accuracy is improved by combining spelling, phoneme-based information and string normalizations and segmentations into medical terms. These encouraging results have enabled the integration of this method into two projects funded by the French National Research Agency-Technologies for Health Care. The first aims to facilitate the coding process of clinical free texts contained in Electronic Health Records and discharge summaries, whereas the second aims at improving information retrieval through Electronic Health Records. PMID:23095521