Volume 29, Issue 1 (January 2005)
Articles in the Current Issue:
Research Article
Homogenization framework for three-dimensional elastoplastic finite element analysis of a grouted pipe-roofing reinforcement method for tunnelling
NASA Astrophysics Data System (ADS)
Bae, G. J.; Shin, H. S.; Sicilia, C.; Choi, Y. G.; Lim, J. J.
2005-01-01
This paper deals with the grouted pipe-roofing reinforcement method used in the construction of tunnels through weak ground. The system consists of installing, prior to the excavation of a length of tunnel, an array of pipes forming an umbrella above the area to be excavated. In some cases, these pipes are later used to inject grout to strengthen the ground and connect the pipes. This system has proven very efficient in reducing tunnel convergence and water inflow when tunnelling through weak ground. However, due to the geometrical and mechanical complexity of the problem, existing finite element frameworks are inappropriate for simulating tunnelling with this method. In this paper, a mathematical framework based on a homogenization technique to simulate grouted pipe-roofing reinforced ground, and its implementation in a 3-D finite element programme that can handle staged construction, are presented. The constitutive model developed accounts for the main design parameters of the problem and requires only the geometrical and mechanical properties of the constituents. Additionally, the homogenization approach means that the finite element mesh can be generated easily and that re-meshing is not required when basic geometrical parameters, such as the orientation of the pipes, are changed.
Recovering complete and draft population genomes from metagenome datasets
Sangwan, Naseer; Xia, Fangfang; Gilbert, Jack A.
2016-03-08
Assembly of metagenomic sequence data into microbial genomes is of fundamental value to improving our understanding of microbial ecology and metabolism by elucidating the functional potential of hard-to-culture microorganisms. Here, we provide a synthesis of available methods to bin metagenomic contigs into species-level groups and highlight how genetic diversity, sequencing depth, and coverage influence binning success. Despite the computational cost of application to deeply sequenced complex metagenomes (e.g., soil), covarying patterns of contig coverage across multiple datasets significantly improve the binning process. We also discuss and compare current genome validation methods and reveal how these methods tackle the problem of chimeric genome bins, i.e., sequences from multiple species. Finally, we explore how population genome assembly can be used to uncover biogeographic trends and to characterize the effect of in situ functional constraints on genome-wide evolution.
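The key idea above — contigs from the same genome covary in coverage across datasets — can be sketched as follows. This is a minimal illustration with hypothetical coverage values and a simple correlation threshold, not any specific published binning tool:

```python
import numpy as np

# Hypothetical coverage of 6 contigs across 4 metagenome datasets.
# Contigs from the same genome covary in coverage across samples.
coverage = np.array([
    [10.0, 2.0, 8.0, 1.0],   # genome A
    [11.0, 2.2, 7.5, 1.1],   # genome A
    [ 9.5, 1.8, 8.2, 0.9],   # genome A
    [ 1.0, 9.0, 2.0, 7.0],   # genome B
    [ 1.2, 8.5, 2.1, 7.4],   # genome B
    [ 0.9, 9.4, 1.8, 6.8],   # genome B
])

# Pearson correlation between contig coverage profiles.
corr = np.corrcoef(coverage)

# Greedy single-linkage binning: contigs whose profiles correlate
# above a threshold land in the same bin.
threshold = 0.9
bins, assigned = [], set()
for i in range(len(coverage)):
    if i in assigned:
        continue
    members = [j for j in range(len(coverage))
               if j not in assigned and corr[i, j] > threshold]
    assigned.update(members)
    bins.append(members)

print(bins)  # two bins, matching the two genomes
```

Real binners combine coverage covariation with sequence composition (e.g., tetranucleotide frequencies), but the covariation signal alone already separates these toy genomes.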
SU-F-T-253: Volumetric Comparison Between 4D CT Amplitude and Phase Binning Mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, G; Ma, R; Reyngold, M
2016-06-15
Purpose: Motion artifacts in 4DCT images can affect radiation treatment quality. To identify the most robust and accurate binning method, we compare the volume difference between targets delineated on amplitude-binned and phase-binned 4DCT scans. Methods: A Varian RPM system and CT scanner were used to acquire 4DCTs of a Quasar phantom with embedded cubic and spherical objects undergoing superior-inferior motion. Eight patients' respiration waveforms were used to drive the phantom. The 4DCT scan was reconstructed into 10 phase and 10 amplitude bins (2 mm slices). A scan of the static phantom was also acquired. For each waveform, sphere and cube volumes were generated automatically on each phase using HU thresholding. Phase (amplitude) ITVs were the union of object volumes over all phase (amplitude) binned images. The sphere and cube volumes measured in the static phantom scan were V_sphere = 4.19 cc and V_cube = 27.0 cc. Volume difference (VD) and Dice similarity coefficient (DSC) of the ITVs, and mean volume error (MVE), defined as the average target volume percentage difference between each phase image and the static image, were used to evaluate the performance of amplitude and phase binning. Results: Averaged over the eight breathing traces, the VD and DSC of the internal target volume (ITV) between amplitude and phase binning were 3.4%±3.2% (mean±std) and 95.9%±2.1% for the sphere, and 2.1%±3.3% and 98.0%±1.5% for the cube, respectively. For all waveforms, the average sphere MVE of amplitude and phase binning was 6.5%±5.0% and 8.2%±6.3%, respectively; the average cube MVE of amplitude and phase binning was 5.7%±3.5% and 12.9%±8.9%, respectively. Conclusion: ITV volume and spatial overlap, as assessed by VD and DSC, are similar between amplitude and phase binning. Compared to phase binning, amplitude binning results in lower MVE, suggesting it is less susceptible to motion artifacts.
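The volume difference and Dice similarity coefficient used in the study are straightforward to compute from binary target masks. A minimal sketch on hypothetical masks (the grid and offsets below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical binary target masks on a common voxel grid (True = inside ITV).
itv_phase = np.zeros((20, 20, 20), dtype=bool)
itv_amplitude = np.zeros((20, 20, 20), dtype=bool)
itv_phase[5:15, 5:15, 5:15] = True        # 1000 voxels
itv_amplitude[6:16, 5:15, 5:15] = True    # same size, shifted one slice

v1, v2 = itv_phase.sum(), itv_amplitude.sum()
overlap = np.logical_and(itv_phase, itv_amplitude).sum()

# Volume difference as a percentage of the mean volume.
vd = abs(int(v1) - int(v2)) / ((v1 + v2) / 2) * 100

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
dsc = 2 * overlap / (v1 + v2)

print(f"VD = {vd:.1f}%, DSC = {dsc:.3f}")
```

Here the two masks have identical volume (VD = 0%) but imperfect spatial overlap (DSC = 0.9), which is exactly the distinction the study's two metrics are designed to separate.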
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires in the early stages of a fire. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, high false detection rates, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images is proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained over the band range of 400 to 720 nm, and the images were divided into bins. The Euclidean distance among the bins was taken as a measure of the difference between spectrograms. After obtaining the spectral feature vectors of the dynamic regions, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the multispectral smoke detection method can effectively distinguish fire smoke from water fog. Combined with video image processing, the multispectral detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
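The spectral comparison step — a Euclidean distance between spectral feature vectors — can be sketched as follows. The spectra below are synthetic stand-ins; real smoke and fog responses are measured, not modeled like this:

```python
import numpy as np

# Hypothetical mean spectra (400-720 nm, 10 nm bands) of two dynamic
# regions; the Gaussian shapes are arbitrary illustrations.
wavelengths = np.arange(400, 721, 10)
smoke = np.exp(-((wavelengths - 500) / 120.0) ** 2)
fog = np.exp(-((wavelengths - 620) / 80.0) ** 2)

def spectral_distance(a, b):
    """Euclidean distance between two spectral feature vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# A large spectral distance separates the two region types even when
# their grey-level appearance in a single video band looks similar.
d = spectral_distance(smoke, fog)
print(f"smoke vs. fog spectral distance: {d:.3f}")
```

The point of the multispectral approach is that this distance is computed over many bands, so regions that are indistinguishable in a single-band intensity image can still be told apart.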
-
Siletz, Anaar; Childers, Christopher P; Faltermeier, Claire; Singer, Emily S; Hu, Q Lina; Ko, Clifford Y; Kates, Stephen L; Maggard-Gibbons, Melinda; Wick, Elizabeth
2018-01-01
Enhanced recovery pathways (ERPs) have been shown to improve patient outcomes in a variety of contexts. This review summarizes the evidence and defines a protocol for perioperative care of patients with hip fracture and was conducted for the Agency for Healthcare Research and Quality safety program for improving surgical care and recovery. Perioperative care was divided into components or "bins." For each bin, a semisystematic review of the literature was conducted using MEDLINE with priority given to systematic reviews, meta-analyses, and randomized controlled trials. Observational studies were included when higher levels of evidence were not available. Existing guidelines for perioperative care were also incorporated. For convenience, the components of care that are under the auspices of anesthesia providers will be reported separately. Recommendations for an evidence-based protocol were synthesized based on review of this evidence. Eleven bins were identified. Preoperative risk factor bins included nutrition, diabetes mellitus, tobacco use, and anemia. Perioperative management bins included thromboprophylaxis, timing of surgery, fluid management, drain placement, early mobilization, early alimentation, and discharge criteria/planning. This review provides the evidence basis for an ERP for perioperative care of patients with hip fracture.
-
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
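The decomposition at the core of the method — each set of patches split into a low-rank background shared across energy bins plus a sparse, bin-specific component — can be sketched with a generic alternating soft-thresholding scheme. This is a simplified stand-in for the paper's NLSMD optimization, with arbitrary parameters:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lowrank_sparse(X, lam=0.5, tau=1.0, iters=50):
    """Split X into L (low-rank, shared across energy bins) plus
    S (sparse, bin-specific) by alternating singular-value and
    elementwise soft-thresholding. Parameters are arbitrary."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U * soft(sv, tau)) @ Vt       # shrink singular values
        S = soft(X - L, lam)               # shrink residual entries
    return L, S

rng = np.random.default_rng(0)
# Rank-1 "background" common to 5 energy bins + sparse spectral features.
X = np.outer(rng.random(40), np.ones(5))
X[rng.integers(0, 40, 6), rng.integers(0, 5, 6)] += 3.0

L, S = lowrank_sparse(X)
print("max residual:", float(np.abs(X - L - S).max()))
```

By construction of the final soft-thresholding step, the elementwise residual |X - L - S| is bounded by `lam`, so the two components together account for the data up to the shrinkage.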
-
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than the differences between bin values used in traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. Experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
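A ratio-flavored comparison of two histograms can be sketched as below. Note this is a simplified illustration of the intra-cross-bin idea, not the paper's exact BRD definition; cross-products h_i·g_j are used in place of explicit ratios so that empty bins need no special handling:

```python
import numpy as np

def brd_l1(h, g):
    """Compare two histograms through all pairwise cross-products
    h_i*g_j vs. g_i*h_j, i.e. a ratio comparison written without
    division. A sketch of a ratio-based distance, not the published
    BRD formula."""
    h, g = np.asarray(h, float), np.asarray(g, float)
    return float(np.abs(np.outer(h, g) - np.outer(g, h)).sum())

h  = np.array([0.1, 0.2, 0.3, 0.4])
g  = np.array([0.1, 0.2, 0.3, 0.4])   # identical histogram
g2 = np.array([0.4, 0.2, 0.3, 0.1])   # same total mass, different shape

print(brd_l1(h, g))    # 0.0: identical histograms
print(brd_l1(h, g2))   # positive: shapes differ despite equal mass
```

Because the comparison involves relations between pairs of bins rather than absolute bin values, it is insensitive to a common rescaling of either histogram's values — the property that motivates ratio-based distances under normalization changes.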
-
Tjondrokoesoemo, Andoria; Park, Ki Ho; Ferrante, Christopher; Komazaki, Shinji; Lesniak, Sebastian; Brotto, Marco; Ko, Jae-Kyun; Zhou, Jingsong; Weisleder, Noah; Ma, Jianjie
2011-01-01
Efficient intracellular Ca2+ ([Ca2+]i) homeostasis in skeletal muscle requires intact triad junctional complexes comprised of t-tubule invaginations of plasma membrane and terminal cisternae of sarcoplasmic reticulum. Bin1 consists of a specialized BAR domain that is associated with t-tubule development in skeletal muscle and involved in tethering the dihydropyridine receptors (DHPR) to the t-tubule. Here, we show that Bin1 is important for Ca2+ homeostasis in adult skeletal muscle. Since systemic ablation of Bin1 in mice results in postnatal lethality, in vivo electroporation mediated transfection method was used to deliver RFP-tagged plasmid that produced short –hairpin (sh)RNA targeting Bin1 (shRNA-Bin1) to study the effect of Bin1 knockdown in adult mouse FDB skeletal muscle. Upon confirming the reduction of endogenous Bin1 expression, we showed that shRNA-Bin1 muscle displayed swollen t-tubule structures, indicating that Bin1 is required for the maintenance of intact membrane structure in adult skeletal muscle. Reduced Bin1 expression led to disruption of t-tubule structure that was linked with alterations to intracellular Ca2+ release. Voltage-induced Ca2+ released in isolated single muscle fibers of shRNA-Bin1 showed that both the mean amplitude of Ca2+ current and SR Ca2+ transient were reduced when compared to the shRNA-control, indicating compromised coupling between DHPR and ryanodine receptor 1. The mean frequency of osmotic stress induced Ca2+ sparks was reduced in shRNA-Bin1, indicating compromised DHPR activation. ShRNA-Bin1 fibers also displayed reduced Ca2+ sparks' amplitude that was attributed to decreased total Ca2+ stores in the shRNA-Bin1 fibers. Human mutation of Bin1 is associated with centronuclear myopathy and SH3 domain of Bin1 is important for sarcomeric protein organization in skeletal muscle. 
Our study showing the importance of Bin1 in the maintenance of intact t-tubule structure and ([Ca2+]i) homeostasis in adult skeletal muscle could provide mechanistic insight on the potential role of Bin1 in skeletal muscle contractility and pathology of myopathy. PMID:21984944
-
A novel algorithm for fast and efficient multifocus wavefront shaping
NASA Astrophysics Data System (ADS)
Fayyaz, Zahra; Nasiriavanaki, Mohammadreza
2018-02-01
Wavefront shaping using a spatial light modulator (SLM) is a popular method for focusing light through turbid media, such as biological tissue. In iterative optimization methods, because of the very large number of pixels on the SLM, pixels are usually grouped into larger super-pixels, or bins, and the phase values of the bins are adjusted to obtain an optimum phase map, and hence a focus. In this study, an efficient optimization algorithm is proposed to obtain an arbitrary map of foci utilizing all the SLM pixels or small bin sizes. The application of such a methodology in dermatology, hair removal in particular, is explored and discussed.
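Binning SLM pixels simply means one optimization variable drives a whole block of physical pixels. A minimal sketch of that expansion step (the bin size and SLM resolution here are arbitrary, not the study's settings):

```python
import numpy as np

# Hypothetical 512x512 SLM optimized at a coarse bin size, then each
# bin's phase value is expanded over its block of physical pixels.
slm_shape = (512, 512)
bin_size = 32
n_bins = (slm_shape[0] // bin_size, slm_shape[1] // bin_size)  # 16x16 bins

rng = np.random.default_rng(1)
bin_phases = rng.uniform(0, 2 * np.pi, n_bins)   # one phase per bin

# Kronecker product replicates each bin value over a bin_size x bin_size block.
phase_map = np.kron(bin_phases, np.ones((bin_size, bin_size)))

print(phase_map.shape)  # (512, 512)
```

Shrinking `bin_size` toward 1 recovers per-pixel control, which is what the proposed algorithm aims to make computationally affordable.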
-
GenoGAM: genome-wide generalized additive models for ChIP-Seq analysis.
Stricker, Georg; Engelhardt, Alexander; Schulz, Daniel; Schmid, Matthias; Tresch, Achim; Gagneur, Julien
2017-08-01
Chromatin immunoprecipitation followed by deep sequencing (ChIP-Seq) is a widely used approach to study protein-DNA interactions. Often, the quantities of interest are the differential occupancies relative to controls, between genetic backgrounds, treatments, or combinations thereof. Current methods for differential occupancy of ChIP-Seq data rely, however, on binning or sliding window techniques, for which the choice of window and bin sizes is subjective. Here, we present GenoGAM (Genome-wide Generalized Additive Model), which brings the well-established and flexible generalized additive models framework to genomic applications using a data parallelism strategy. We model ChIP-Seq read count frequencies as products of smooth functions along chromosomes. Smoothing parameters are objectively estimated from the data by cross-validation, eliminating the ad hoc binning and windowing needed by current approaches. GenoGAM provides base-level and region-level significance testing for full factorial designs. Application to a ChIP-Seq dataset in yeast showed increased sensitivity over existing differential occupancy methods while controlling for type I error rate. By analyzing a set of DNA methylation data and illustrating an extension to a peak caller, we further demonstrate the potential of GenoGAM as a generic statistical modeling tool for genome-wide assays. Software is available from Bioconductor: https://www.bioconductor.org/packages/release/bioc/html/GenoGAM.html. Contact: gagneur@in.tum.de. Supplementary information is available at Bioinformatics online.
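The contrast with ad hoc binning can be illustrated by choosing a smoothing parameter via cross-validation on synthetic coverage data. This is a toy moving-average smoother, far simpler than GenoGAM's penalized splines, and all values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 50 * np.exp(-((np.arange(400) - 200) / 30.0) ** 2)
counts = rng.poisson(truth + 2)          # noisy ChIP-Seq-like coverage

def smooth(y, w):
    """Centered moving average with half-width w."""
    k = np.ones(2 * w + 1) / (2 * w + 1)
    return np.convolve(y, k, mode="same")

# Choose the smoothing parameter from the data instead of picking a
# bin size by hand: fit on even positions, score on held-out odd ones.
train, held_out = counts[0::2], counts[1::2]
best_w = min((1, 2, 5, 10, 25, 50),
             key=lambda w: np.mean((smooth(train, w) - held_out) ** 2))
print("selected half-width:", best_w)
```

The selected width is whatever minimizes held-out error for this particular noise level and peak width — the data, not the analyst, sets the resolution, which is the point the abstract makes about cross-validated smoothing.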
-
Markov chain Monte Carlo linkage analysis: effect of bin width on the probability of linkage.
Slager, S L; Juo, S H; Durner, M; Hodge, S E
2001-01-01
We analyzed part of the Genetic Analysis Workshop (GAW) 12 simulated data using Markov chain Monte Carlo (MCMC) methods as implemented in the computer program Loki. The MCMC method reports the "probability of linkage" (PL) across the chromosomal regions of interest. The point of maximum PL can then be taken as a "location estimate" for the location of the quantitative trait locus (QTL). However, Loki does not provide a formal statistical test of linkage. In this paper, we explore how the bin width used in the calculations affects the maximum PL and the location estimate. We analyzed age at onset (AO) and quantitative trait number 5, Q5, from 26 replicates of the general simulated data in one region where we knew a major gene, MG5, is located. For each trait, we found the maximum PL and the corresponding location estimate using four different bin widths. We found that bin width, as expected, does affect the maximum PL and the location estimate, and we recommend that users of Loki explore how their results vary with different bin widths.
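The sensitivity of a histogram-based location estimate to bin width is easy to reproduce on synthetic data. This is illustrative only; Loki's PL computation is far more involved than a histogram of sampled locations:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical MCMC-sampled QTL locations along a 100 cM region,
# concentrated near 70 cM.
positions = rng.normal(70.0, 8.0, 5000)

# The location estimate (center of the bin with maximum mass)
# shifts as the bin width changes.
for width in (1.0, 2.0, 5.0, 10.0):
    edges = np.arange(0.0, 100.0 + width, width)
    hist, _ = np.histogram(positions, bins=edges)
    centre = edges[np.argmax(hist)] + width / 2
    print(f"bin width {width:4.1f} cM -> location estimate {centre:.1f} cM")
```

With coarse bins the estimate can only land on a coarse grid, and the identity of the maximal bin can flip between runs, which is exactly the bin-width dependence the paper recommends checking.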
-
Interference Cognizant Network Scheduling
NASA Technical Reports Server (NTRS)
Hall, Brendan (Inventor); Bonk, Ted (Inventor); DeLay, Benjamin F. (Inventor); Varadarajan, Srivatsan (Inventor); Smithgall, William Todd (Inventor)
2017-01-01
Systems and methods for interference cognizant network scheduling are provided. In certain embodiments, a method of scheduling communications in a network comprises identifying a bin of a global timeline for scheduling an unscheduled virtual link, wherein a bin is a segment of the timeline; identifying a pre-scheduled virtual link in the bin; and determining whether the pre-scheduled and unscheduled virtual links share a port. In certain embodiments, if the unscheduled and pre-scheduled virtual links do not share a port, transmission of the unscheduled virtual link is scheduled to overlap with the scheduled transmission of the pre-scheduled virtual link; if the unscheduled and pre-scheduled virtual links do share a port, a start time delay for the unscheduled virtual link is determined based on the port, and transmission of the unscheduled virtual link is scheduled in the bin based on the start time delay so as to overlap part of the scheduled transmission of the pre-scheduled virtual link.
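The port-conflict rule can be sketched as a toy scheduler. All names and the delay model below are hypothetical illustrations of the abstract's logic, not the patented method:

```python
# Toy sketch: a new virtual link may fully overlap an already-scheduled
# one inside a bin only if the two share no port; a shared port forces
# a start-time delay instead of a full exclusion.
def schedule(bin_links, new_link, port_delay):
    """bin_links: list of dicts with 'ports' and 'start'.
    Returns the start time chosen for new_link within the bin."""
    start = 0
    for vl in bin_links:
        if set(vl["ports"]) & set(new_link["ports"]):
            # Shared port: delay past the conflicting transmission.
            start = max(start, vl["start"] + port_delay)
    return start

prescheduled = [{"ports": ["A", "B"], "start": 0}]
no_conflict = {"ports": ["C", "D"]}
conflict = {"ports": ["B", "C"]}

print(schedule(prescheduled, no_conflict, port_delay=10))  # 0 (full overlap allowed)
print(schedule(prescheduled, conflict, port_delay=10))     # 10 (delayed on port B)
```

The interference-cognizant point is the partial overlap in the conflict case: the delayed link still shares the bin rather than being pushed to a later bin entirely.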
-
BIN1 is Reduced and Cav1.2 Trafficking is Impaired in Human Failing Cardiomyocytes
Hong, Ting-Ting; Smyth, James W.; Chu, Kevin Y.; Vogan, Jacob M.; Fong, Tina S.; Jensen, Brian C.; Fang, Kun; Halushka, Marc K.; Russell, Stuart D.; Colecraft, Henry; Hoopes, Charles W.; Ocorr, Karen; Chi, Neil C.; Shaw, Robin M.
2011-01-01
Background: Heart failure is a growing epidemic, and a typical aspect of heart failure pathophysiology is altered calcium transients. Normal cardiac calcium transients are initiated by Cav1.2 channels at cardiac T-tubules. BIN1 is a membrane scaffolding protein that causes Cav1.2 to traffic to T-tubules in healthy hearts. The mechanisms of Cav1.2 trafficking in heart failure are not known. Objective: To study BIN1 expression and its effect on Cav1.2 trafficking in failing hearts. Methods: Intact myocardium and freshly isolated cardiomyocytes from non-failing and end-stage failing human hearts were used to study BIN1 expression and Cav1.2 localization. To confirm the dependence of Cav1.2 surface expression on BIN1, patch clamp recordings of Cav1.2 current were performed in cell lines with and without trafficking-competent BIN1. Also, in adult mouse cardiomyocytes, surface Cav1.2 and calcium transients were studied after shRNA-mediated knockdown of BIN1. For a functional readout in intact heart, calcium transients and cardiac contractility were analyzed in a zebrafish model with morpholino-mediated knockdown of BIN1. Results: BIN1 expression is significantly decreased in failing cardiomyocytes at both the mRNA (30% down) and protein (36% down) levels. Peripheral Cav1.2 is reduced by 42% by imaging, and the biochemical T-tubule fraction of Cav1.2 is reduced by 68%. Total calcium current is reduced by 41% in a cell line expressing a non-trafficking BIN1 mutant. In mouse cardiomyocytes, BIN1 knockdown decreases surface Cav1.2 and impairs calcium transients. In zebrafish hearts, BIN1 knockdown causes a 75% reduction in calcium transients and severe ventricular contractile dysfunction. Conclusions: The data indicate that BIN1 is significantly reduced in human heart failure, and this reduction impairs Cav1.2 trafficking, calcium transients, and contractility. PMID:22138472
-
EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.
Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin
2018-04-24
The availability of large-scale genomic, epigenetic, and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and then experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. Contact: bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
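A weighted cosine similarity of the general kind EWCos builds on can be sketched as follows. The choice to weight genes by signature magnitude is an assumption for illustration; the paper defines its own weighting scheme:

```python
import numpy as np

def weighted_cosine(x, y, w):
    """Cosine similarity with per-gene weights, down-weighting
    uninformative expression changes. A generic sketch of the idea
    behind EWCos, not its published formula."""
    xw, yw = x * np.sqrt(w), y * np.sqrt(w)
    return float(xw @ yw / (np.linalg.norm(xw) * np.linalg.norm(yw)))

# Hypothetical disease signature vs. a drug's expression changes
# (log fold-changes for five genes; values invented).
disease = np.array([2.0, -1.5, 0.1, 0.05, 1.0])
drug = np.array([-1.8, 1.2, 0.2, -0.1, -0.9])
weights = np.abs(disease)   # assumed: weight genes by signature magnitude

score = weighted_cosine(disease, drug, weights)
print(f"{score:.3f}")  # strongly negative: drug reverses the signature
```

In connectivity-map-style repositioning, a strongly negative score flags a drug whose expression changes oppose the disease signature, making it a reversal candidate; the weights keep near-zero, noisy genes from diluting that signal.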
-
Symmetric autocompensating quantum key distribution
NASA Astrophysics Data System (ADS)
Walton, Zachary D.; Sergienko, Alexander V.; Levitin, Lev B.; Saleh, Bahaa E. A.; Teich, Malvin C.
2004-08-01
We present quantum key distribution schemes which are autocompensating (require no alignment) and symmetric (Alice and Bob receive photons from a central source) for both polarization and time-bin qubits. The primary benefit of the symmetric configuration is that both Alice and Bob may have passive setups (neither Alice nor Bob is required to make active changes for each run of the protocol). We show that both the polarization and the time-bin schemes may be implemented with existing technology. The new schemes are related to previously described schemes by the concept of advanced waves.
-
NASA Astrophysics Data System (ADS)
Koglin, Johnathon
Accurate nuclear reaction data from a few keV to tens of MeV and across the table of nuclides is essential to a number of applications of nuclear physics, including national security, nuclear forensics, nuclear astrophysics, and nuclear energy. Precise determination of (n, f) and neutron capture cross sections for reactions in high-flux environments is particularly important for a proper understanding of nuclear reactor performance and stellar nucleosynthesis. In these extreme environments, reactions on short-lived and otherwise difficult-to-produce isotopes play a significant role in system evolution and provide insights into the types of nuclear processes taking place; a detailed understanding of these processes is necessary to properly determine cross sections far from stability. Indirect methods are often attempted to measure cross sections on isotopes that are difficult to separate in a laboratory setting. Using the surrogate approach, the same compound nucleus as in the reaction of interest is created through a "surrogate" reaction on a different isotope and the resulting decay is measured. This result is combined with appropriate reaction theory for compound nucleus population, from which the desired cross sections can be inferred. This method has shown promise, but the theoretical framework often lacks the experimental data needed to constrain models. In this work, dual arrays of silicon telescope particle identification detectors and photovoltaic (solar) cell fission fragment detectors have been used to measure the fission probability of the 240Pu(alpha, alpha'f) reaction - a surrogate for 239Pu(n, f) - at a beam energy of 35.9(2) MeV, at eleven scattering angles from 40° to 140° in 10° intervals, and at nuclear excitation energies up to 16 MeV. Within experimental uncertainty, the maximum fission probability was observed at the neutron separation energy for each alpha scattering angle.
Fission probabilities were separated into five 500 keV bins from 5.5 MeV to 8.0 MeV and one bin from 4.5 MeV to 5.5 MeV. Across energy bins, the fission probability increases approximately linearly with increasing excitation energy: at 90°, the fission probability increases from 0.069(6) in the lowest energy bin to 0.59(2) in the highest. Likewise, within a single energy bin, the fission probability increases with alpha' scattering angle: within the 6.5 MeV to 7.0 MeV energy bin, the fission probability increased from 0.41(1) at 60° to 0.81(10) at 140°. Fission fragment angular distributions were also measured, integrated over each energy bin. These distributions were fit to theoretical distributions based on combinations of transitional nuclear vibrational and rotational excitations at the saddle point. Contributions from specific K vibrational states were extracted and combined with fission probability measurements to determine the relative fission probability of each state as a function of nuclear excitation energy. Within a given excitation energy bin, it is found that contributions from K states greater than the minimum K = 0 state tend to increase with increasing alpha' scattering angle. This is attributed to an increase in the transferred angular momentum associated with larger scattering angles. The 90° alpha' scattering angle produced the highest quality results. The relative contributions of K states do not show a discernible trend across the energy spectrum. The energy-binned results confirm existing measurements that place a K = 2 state in the first energy bin, with the opening of K = 1 and K = 4 states at energies above 5.5 MeV. This experiment represents the first of its kind in which fission probabilities and angular distributions are simultaneously measured at a large number of scattering angles.
The acquired fission probability, angular distribution, and K state contribution provide a diverse dataset against which microscopic fission models can be constrained and further the understanding of the properties of the 240Pu fission.
-
Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung
2014-01-01
The decay exponent of the landslide frequency-area distribution is widely used for assessing the consequences of landslides, and some studies argue that the slope of the exponent decay is universal and independent of mechanisms and environmental settings. However, the documented exponent slopes are diverse, and data processing is hypothesized here to be the cause of this inconsistency. An elaborated statistical experiment and two actual landslide inventories were used to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from a predefined inverse-gamma distribution and then analyzed by three data processing procedures: logarithmic binning (LB), normalized logarithmic binning (NLB), and the cumulative distribution function (CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent slopes. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely few and extremely plentiful landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse exponents documented in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes.
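The binning-free estimation the study favors can be illustrated with the standard maximum likelihood estimator for a continuous power law, alpha_hat = 1 + n / Σ ln(x_i/x_min). A pure power law stands in here for the study's inverse-gamma tail:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_true, x_min, n = 2.5, 1.0, 20000

# Draw synthetic landslide "areas" from a continuous power law by
# inverse-CDF sampling: x = x_min * (1 - u)^(-1/(alpha - 1)).
u = rng.random(n)
areas = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum likelihood estimate of the exponent: no binning involved,
# so there is no bin-width choice to bias the result.
alpha_hat = 1.0 + n / np.log(areas / x_min).sum()

print(f"true {alpha_true}, MLE {alpha_hat:.3f}")
```

The MLE recovers the true exponent to within its sampling error of about (alpha-1)/sqrt(n), whereas a least-squares fit to log-binned counts would pick up the bin-width-dependent bias the paper quantifies.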
-
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
-
NASA Astrophysics Data System (ADS)
Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.
2017-12-01
An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points then makes production of mapped data products significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated.
We identify four binning regimes based on trade-offs among the spatial resolution of the grid, the size of the FOV, and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
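The binning step can be sketched with a simplified stand-in grid (a regular lat/lon grid rather than the paper's geodesic icosahedral grid; the resolution, point counts, and synthetic radiance values are illustrative):

```python
import numpy as np

# Bin point observations onto a regular lat/lon grid with np.bincount,
# keeping only per-cell sums and counts (minimum-storage aggregation).
rng = np.random.default_rng(1)
n = 100_000
lon = rng.uniform(-180, 180, n)
lat = rng.uniform(-90, 90, n)
radiance = rng.normal(100.0, 5.0, n)           # synthetic observations

res = 1.0                                       # grid resolution in degrees
i = ((lat + 90) / res).astype(np.int64)
j = ((lon + 180) / res).astype(np.int64)
flat = i * int(360 / res) + j                   # flatten the 2-D cell index

counts = np.bincount(flat)
sums = np.bincount(flat, weights=radiance)

# Conditional mapping: produce a mean only where data density is sufficient
occupied = counts > 0
mean_map = np.full_like(sums, np.nan)
mean_map[occupied] = sums[occupied] / counts[occupied]
print(occupied.sum(), "occupied cells")
```

Because `np.bincount` reduces each chunk of points independently, the same aggregation can be run in parallel over chunks and the per-cell sums and counts merged afterwards, which is the property the parallel-binning approach relies on.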
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan
Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • A Gabor wavelet filter is used to extract the solid waste image features. • A Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, while capturing the bin image, it is challenging to position the camera so that the bin area is centred in the image. As yet, there is no ideal system that can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and a Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to previous image processing based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
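The DTW core of the pipeline can be sketched on toy 1-D profiles (the Gabor-wavelet features and MLP classifier are omitted, and the inputs are illustrative, not the authors' image data):

```python
import numpy as np

# Classic dynamic time warping: minimal cumulative alignment cost between
# two sequences, allowing local stretching/compression of the time axis.
def dtw_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A profile and a time-warped copy align perfectly under DTW
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # -> 0.0
```

Matching a template profile of a bin against row or column intensity profiles of a candidate image region with this cost is one way such a detector can tolerate shifts and scale changes that would defeat a rigid template match.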
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; O’Connell, D; Lee, P
2016-06-15
Purpose: A published 5DCT breathing motion model enables image reconstruction at any user-selected breathing phase, defined by the model as a specific amplitude (v) and rate (f). Generation of reconstructed phase-specific CT scans will be required for time-independent radiation dose distribution simulations. This work answers the question: how many amplitude and rate bins are required to describe the tumor motion with a specific spatial resolution? Methods: 19 lung-cancer patients with 21 tumors were scanned using a free-breathing 5DCT protocol, employing an abdominally positioned pneumatic-bellows breathing surrogate and yielding voxel-specific motion model parameters α and β corresponding to motion as a function of amplitude and rate, respectively. Tumor GTVs were contoured on the first (reference) of 25 successive free-breathing fast helical CT image sets. The tumor displacements were binned into widths of 1 mm to 5 mm in 1 mm steps and the total required number of bins recorded. The simulation evaluated the number of bins needed to encompass 100% of the breathing amplitude and between the 5th and 95th percentile amplitudes to exclude breathing outliers. Results: The mean respiration-induced tumor motion was 9.90 mm ± 7.86 mm with a maximum of 25 mm. The number of bins required was a strong function of the spatial resolution and varied widely between patients. For example, for 2 mm bins, between 1–13 amplitude bins and 1–9 rate bins were required to encompass 100% of the breathing amplitude, while 1–6 amplitude bins and 1–3 rate bins were required to encompass 90% of the breathing amplitude. Conclusion: The strong relationship between the number of bins and spatial resolution, as well as the large variation between patients, implies that time-independent radiation dose distribution simulations should be conducted using patient-specific data and that the breathing conditions will have to be carefully considered.
This work will lead to the assessment of the dosimetric impact of binning resolution. This study is supported by Siemens Healthcare.
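The bin-counting question can be sketched as follows (the `bins_needed` helper and the synthetic displacement samples are hypothetical, not the authors' code or data):

```python
import numpy as np

# How many fixed-width bins cover a tumor's displacement range, with and
# without trimming breathing outliers to the 5th-95th percentile window?
def bins_needed(displacements_mm, width_mm, trim=False):
    d = np.asarray(displacements_mm, dtype=float)
    if trim:
        lo, hi = np.percentile(d, [5, 95])
    else:
        lo, hi = d.min(), d.max()
    return max(1, int(np.ceil((hi - lo) / width_mm)))

rng = np.random.default_rng(2)
disp = np.abs(rng.normal(0.0, 4.0, 1000))       # synthetic amplitudes, mm
for w in range(1, 6):                           # 1 mm to 5 mm bin widths
    print(w, bins_needed(disp, w), bins_needed(disp, w, trim=True))
```

Running this over samples with different spreads reproduces the qualitative finding: the required bin count falls quickly as the bin width grows, and percentile trimming removes the bins that exist only to hold outlier breaths.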
-
An optimal FFT-based anisotropic power spectrum estimator
NASA Astrophysics Data System (ADS)
Hand, Nick; Li, Yin; Slepian, Zachary; Seljak, Uroš
2017-07-01
Measurements of line-of-sight dependent clustering via the galaxy power spectrum's multipole moments constitute a powerful tool for testing theoretical models in large-scale structure. Recent work shows that this measurement, including a moving line-of-sight, can be accelerated using Fast Fourier Transforms (FFTs) by decomposing the Legendre polynomials into products of Cartesian vectors. Here, we present a faster, optimal means of using FFTs for this measurement. We avoid redundancy present in the Cartesian decomposition by using a spherical harmonic decomposition of the Legendre polynomials. With this method, a given multipole of order l requires only 2l+1 FFTs rather than the (l+1)(l+2)/2 FFTs of the Cartesian approach. For the hexadecapole (l = 4), this translates to 40% fewer FFTs, with increased savings for higher l. The reduction in wall-clock time enables the calculation of finely-binned wedges in P(k,μ), obtained by computing multipoles up to a large lmax and combining them. This transformation has a number of advantages. We demonstrate that by using non-uniform bins in μ, we can isolate plane-of-sky (angular) systematics to a narrow bin at μ ≃ 0 while eliminating the contamination from all other bins. We also show that the covariance matrix of clustering wedges binned uniformly in μ becomes ill-conditioned when combining multipoles up to large values of lmax, but that the problem can be avoided with non-uniform binning. As an example, we present results using lmax=16, for which our procedure requires a factor of 3.4 fewer FFTs than the Cartesian method, while removing the first μ bin leads only to a 7% increase in statistical error on fσ8, as compared to a 54% increase with lmax=4.
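The FFT counts quoted above can be checked directly (the helper names are ours; the totals assume the even multipoles up to lmax are computed independently, odd multipoles vanishing by symmetry):

```python
# FFT counts per multipole: the spherical-harmonic decomposition needs
# 2l+1 FFTs versus (l+1)(l+2)/2 for the Cartesian decomposition.
def ffts_spherical(l):
    return 2 * l + 1

def ffts_cartesian(l):
    return (l + 1) * (l + 2) // 2

for l in (2, 4, 8, 16):
    sph, cart = ffts_spherical(l), ffts_cartesian(l)
    print(l, sph, cart, f"{1 - sph / cart:.0%} fewer")

# Totals over the even multipoles l = 0, 2, ..., 16
even = range(0, 17, 2)
tot_s = sum(ffts_spherical(l) for l in even)
tot_c = sum(ffts_cartesian(l) for l in even)
print(f"lmax=16 total: {tot_c / tot_s:.1f}x fewer FFTs")
```

For the hexadecapole the counts are 9 versus 15 (the 40% saving in the abstract), and the lmax=16 totals give the quoted factor of 3.4.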
-
Visual analytics of large multidimensional data using variable binned scatter plots
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Sharma, Ratnesh K.; Keim, Daniel A.; Janetzko, Halldór
2010-01-01
The scatter plot is a well-known method of visualizing pairs of two-dimensional continuous variables, and multidimensional data can be depicted in a scatter plot matrix. Scatter plots are intuitive and easy to use, but often have a high degree of overlap that may occlude a significant portion of the data. In this paper, we propose variable binned scatter plots to allow the visualization of large amounts of data without overlapping. The basic idea is to use a non-uniform (variable) binning of the x and y dimensions and to plot all the data points that fall within each bin into corresponding squares. Further, we map a third attribute to color for visualizing clusters. Analysts are able to interact with individual data points for record-level information. We have applied these techniques to solve real-world problems in credit card fraud and data center energy consumption, visualizing the data distribution and cause-effect relationships among multiple attributes. A comparison of our methods with two recent well-known variants of the scatter plot is included.
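The binning idea can be sketched numerically (quantile-based edges are one possible choice of variable bins, not necessarily the authors'; the data are synthetic):

```python
import numpy as np

# Variable (non-uniform) binning for a scatter plot: quantile-based edges
# give every bin a comparable share of points, so dense regions are not
# occluded the way they are in a plain scatter plot.
rng = np.random.default_rng(3)
x = rng.lognormal(0.0, 1.0, 10_000)            # heavily skewed dimension
y = rng.normal(0.0, 1.0, 10_000)

qx = np.quantile(x, np.linspace(0, 1, 11))     # 10 variable-width x bins
qy = np.quantile(y, np.linspace(0, 1, 11))     # 10 variable-width y bins
ix = np.clip(np.digitize(x, qx) - 1, 0, 9)
iy = np.clip(np.digitize(y, qy) - 1, 0, 9)

counts = np.zeros((10, 10), dtype=int)
np.add.at(counts, (iy, ix), 1)                 # per-cell point counts
print(counts.sum())
```

Each (iy, ix) cell would be drawn as one square, with a third attribute (or the count itself) mapped to color; uniform bins on the same skewed `x` would instead pile most points into a few cells.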
-
Fast and flexible 3D object recognition solutions for machine vision applications
NASA Astrophysics Data System (ADS)
Effenberger, Ira; Kühnle, Jens; Verl, Alexander
2013-03-01
In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is to use a robot to grip the work pieces out of a bin or from a belt. Such applications need reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within them. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
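One primitive-fitting step can be sketched as a best-fit plane to a 3-D point patch via SVD (total least squares); this is a generic technique, not the authors' implementation, and the outlier handling a real bin-picking system needs (e.g. RANSAC) is omitted:

```python
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane to 3-D points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Synthetic patch on the plane z = 0.5x - 0.25y, with small noise
rng = np.random.default_rng(4)
n = 500
xy = rng.uniform(-1, 1, (n, 2))
z = 0.5 * xy[:, 0] - 0.25 * xy[:, 1] + rng.normal(0, 0.01, n)
c, nrm = fit_plane(np.column_stack([xy, z]))
print(nrm / nrm[2])        # ≈ [-0.5, 0.25, 1] after normalizing by the z term
```

Fitting cylinders and cones works analogously but requires nonlinear least squares; once a few such primitives are located, their poses constrain the pose of the whole work piece.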
-
Redic, Kimberly A; Fang, Kayleen; Christen, Catherine; Chaffee, Bruce W
2018-03-01
Purpose: This study was conducted to determine whether there is contamination on exterior drug packaging, using shipping totes from the distributor and carousel storage bins as surrogate markers of external packaging contamination. Methods: A two-part study was conducted to measure the presence of 5-fluorouracil, ifosfamide, cyclophosphamide, docetaxel, and paclitaxel using surrogate markers for external drug packaging. In Part I, 10 drug distributor shipping totes designated for transport of hazardous drugs provided a snapshot view of contamination from regular use and transit in and out of the pharmacy. An additional two totes designated for transport of non-hazardous drugs served as controls. In Part II, old carousel storage bins (i.e., those in use pre-study) were wiped for a snapshot view of hazardous drug contamination on storage bins. New carousel storage bins were then put into use for storage of the five tested drugs and used for routine storage and inventory maintenance activities. Carousel bins were wiped at 0, 8, 16, and 52 weeks to measure surface contamination. Results: Two of the 10 hazardous shipping totes were contaminated. Three of the five old carousel bins were contaminated with cyclophosphamide. One of the old carousel bins was also contaminated with ifosfamide. There were no detectable levels of hazardous drugs on any of the new storage bins at 0, 8, or 16 weeks. However, at Week 52, there was a detectable level of 5-FU present in the 5-FU carousel bin. Conclusions: Contamination of the surrogate markers suggests that external packaging for hazardous drugs is contaminated, either during the manufacturing process or during routine chain-of-custody activities. These results demonstrate that occupational exposure may occur due to contamination from shipping totes and storage bins, and that handling practices, including the use of personal protective equipment, are warranted.
-
Krawczel, P D; Klaiber, L M; Thibeau, S S; Dann, H M
2012-08-01
Assessing feeding behavior is important in understanding the effects of nutrition and management on the well-being of dairy cows. Historically, collection of these data from cows fed with a Calan Broadbent Feeding System (American Calan Inc., Northwood, NH) required the labor-intensive practices of direct observation or video review. The objective of this study was to evaluate the agreement between the output of a HOBO change-of-state data logger (Onset Computer Corp., Bourne, MA), mounted to the door shell and latch plate, and video data summarized with continuous sampling. Data (number of feed bin visits per day and feeding time in minutes per day) were recorded with both methods from 26 lactating cows and 10 nonlactating cows for 3 d per cow (n=108). The agreement of the data logger and video methods was evaluated using the REG procedure of SAS to compare the mean response of the methods against the difference between the methods. The maximum allowable difference (MAD) was set at ±3 for bin visits and ±20 min for feeding time. Ranges for feed bin visits (2 to 140 per d) and feeding time (28 to 267 min/d) were established from video data. Using the complete data set, agreement was partially established between the data logger and video methods for feed bin visits, but not established for feeding time. The complete data set generated by the data logger was screened to remove visits of a duration ≤3 s, reflecting a cow unable to enter a feed bin (representing 7% of all data) and ≥5,400 s, reflecting a failure of the data logger to align properly with its corresponding magnetic field (representing <1% of all data). Using the resulting screened data set, agreement was established for feed bin visits and feeding time. For bin visits, 4% of the data was beyond the MAD. For feeding time, 3% of the data was beyond the MAD and 74% of the data was ±1 min. 
The nonsignificant P-value, low coefficient of determination, and concentration of the data within the MAD indicate agreement between the change-of-state data logger and video data. This suggests that using a change-of-state data logger to assess the feeding behavior of cows fed from a Calan Broadbent Feeding System is appropriate. Use of the screening criteria for data analysis is recommended. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
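The study's screening rule is simple enough to state as code (the helper name and example durations are ours, for illustration):

```python
# Screening rule from the study: drop logger visits <= 3 s (a cow unable
# to enter the feed bin) and >= 5,400 s (logger misaligned with its
# magnetic field) before summarizing daily visits and feeding time.
def screen_visits(durations_s, low=3, high=5400):
    return [d for d in durations_s if low < d < high]

raw = [2, 45, 300, 3, 6000, 120, 5400, 1]       # hypothetical visit durations
kept = screen_visits(raw)
print(len(kept), sum(kept) / 60)                # visits, feeding minutes
```

With the example input, the 1 s, 2 s, and 3 s entries and the two overlong entries are excluded, leaving three valid visits to summarize.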
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwasaki, Akira; Kubota, Mamoru; Hirota, Junichi
2006-11-15
We have redeveloped a high-energy x-ray spectra estimation method reported by Iwasaki et al. [A. Iwasaki, H. Matsutani, M. Kubota, A. Fujimori, K. Suzaki, and Y. Abe, Radiat. Phys. Chem. 67, 81-91 (2003)]. The method is based on the iterative perturbation principle to minimize differences between measured and calculated transmission curves, originally proposed by Waggener et al. [R. G. Waggener, M. M. Blough, J. A. Terry, D. Chen, N. E. Lee, S. Zhang, and W. D. McDavid, Med. Phys. 26, 1269-1278 (1999)]. The method can estimate spectra applicable for media at least from water to lead using only about ten energy bins. Estimating spectra of 4-15 MV x-ray beams from a linear accelerator, we describe characteristic features of the method with regard to parameters including the prespectrum, number of transmission measurements, number of energy bins, energy bin widths, and artifactual bipeaked spectrum production.
-
Zahiri, Reza; Lafontaine, J. Donald; Schmidt, B. Christian; deWaard, Jeremy R.; Zakharov, Evgeny V.; Hebert, Paul D. N.
2014-01-01
This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or “owlet” moths (Lepidoptera: Noctuoidea), based on vouchered records for 1,541 species (99.1% species coverage) and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages, with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect overlooked species complexes or phylogeographic variation within single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system, with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits, with 38 species falling into both categories, most likely reflecting bidirectional introgression. One-fifth of the species are involved in a BIN merger, reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon and 189 species with low but diagnostic COI divergence. A few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for rapid, accurate estimation of animal diversity. PMID:24667847
-
Data-driven optimal binning for respiratory motion management in PET.
Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A
2018-01-01
Respiratory gating has been used in PET imaging to reduce the amount of image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy-to-understand and easy-to-use optimal bin. To date, optimal binning protocols have utilized externally driven motion characterization strategies that have been tuned with population-derived assumptions and parameters. In this work, we propose a new strategy for characterizing motion directly from a patient's gated scan, and for using that signal to create a patient/instance-specific optimal bin image. Two hundred and nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principal component analysis. A patient-specific "optimal bin" window was derived from this characterization, via methods that mirror traditional optimal window binning strategies. The resulting optimal bin images were validated by correlating quantitative and qualitative measurements in the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm through a single acceptance window parameter allowed its performance in the population to be adjusted toward conservation of motion or reduced noise, enabling users to incorporate their own definition of optimal. In the population of images that were deemed appropriate for segregation, average lesion SUVmax values were 7.9, 8.5, and 9.0 for nongated, optimal bin, and gated images, respectively.
The Pearson correlation of FWHM measurements between optimal bin images and gated images was better than that with nongated images (0.89 and 0.85, respectively). Generally, optimal bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulated motion information in an easy-to-use 3D image. Its simplicity and practicality may make this or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
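One way an acceptance-window selection of this kind can be sketched: pick the shortest contiguous run of amplitude bins whose summed counts reach an acceptance threshold. The `optimal_window` helper, the threshold value, and the histogram are hypothetical illustrations, not the published method:

```python
import numpy as np

def optimal_window(counts, accept=0.6):
    """Shortest contiguous (start, end) bin range holding >= accept of counts."""
    counts = np.asarray(counts, dtype=float)
    need = accept * counts.sum()
    best = None
    for i in range(len(counts)):
        running = 0.0
        for j in range(i, len(counts)):
            running += counts[j]
            if running >= need:
                # keep the shortest qualifying window seen so far
                if best is None or j - i < best[1] - best[0]:
                    best = (i, j)
                break
    return best

hist = [5, 10, 30, 35, 15, 5]        # hypothetical amplitude histogram
print(optimal_window(hist))          # two central bins suffice here
```

Raising `accept` keeps more of the motion range (less segregation, less noise rejection); lowering it concentrates the window where the patient spends most of the breathing cycle, which mirrors the motion-versus-noise trade-off the abstract describes.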
-
Desale, Adino; Taye, Bineyam; Belay, Getachew; Nigatu, Alemayehu
2013-01-01
Introduction: Logistics management information systems (LMIS) for health commodities remain poorly implemented in most developing countries. This study aimed to assess the status of the laboratory logistics management information system for HIV/AIDS and tuberculosis laboratory commodities in public health facilities in Addis Ababa. Methods: A cross-sectional descriptive study was conducted from September 2010 to January 2011 at selected public health facilities. A stratified random sampling method was used to include a total of 43 facilities, which were investigated through quantitative methods using structured questionnaire interviews. Focus group discussions with the designated supply chain managers and key informant interviews were conducted for the qualitative component. Results: There exists a well-designed logistics system for laboratory commodities, with trained pharmacy personnel, distributed standard LMIS formats, and established inventory control procedures. However, the majority of laboratory professionals were not trained in LMIS. The majority of the facilities (60.5%) were stocked out of at least one ART monitoring or TB laboratory reagent, and the highest stock-out rate was for chemistry reagents. Expired ART monitoring laboratory commodities were found in 25 (73.5%) of the facilities. Fifty percent (50%) of the assessed hospitals and 54% of health centers were currently using stock/bin cards for all HIV/AIDS and TB laboratory commodities in the main pharmacy store; of these, only 25% and 20.8%, respectively, were updated with accurate information matching the physical count done at the time of the visit for hospitals and health centers. Conclusion: Even though a well-designed laboratory LMIS exists, the keeping of quality stock/bin cards and LMIS reports was very low. Key ART monitoring laboratory commodities were stocked out at many facilities on the day of the visit and during the past six months.
Based on these findings, training of laboratory personnel managing laboratory commodities and keeping accurate inventory control procedures are recommended. PMID:24106574
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, T; Wang, Y; Fischer-Valuck, B
2015-06-15
Purpose: To develop a novel and rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames, we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments.
While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
-
Han, Koeun; Jeong, Hee-Jin; Yang, Hee-Bum; Kang, Sung-Min; Kwon, Jin-Kyung; Kim, Seungill; Choi, Doil; Kang, Byoung-Cheorl
2016-04-01
Most agricultural traits are controlled by quantitative trait loci (QTLs); however, there are few studies on QTL mapping of horticultural traits in pepper (Capsicum spp.) due to the lack of high-density molecular maps and sequence information. In this study, an ultra-high-density map and 120 recombinant inbred lines (RILs) derived from a cross between C. annuum 'Perennial' and C. annuum 'Dempsey' were used for QTL mapping of horticultural traits. Parental lines and RILs were resequenced at 18× and 1× coverage, respectively. Using a sliding-window approach, an ultra-high-density bin map containing 2,578 bins was constructed. The total length of the map was 1,372 cM, and the average interval between bins was 0.53 cM. A total of 86 significant QTLs controlling 17 horticultural traits were detected. Among these, 32 QTLs controlling 13 traits were major QTLs. Our research shows that the construction of bin maps using low-coverage sequence data is a powerful method for QTL mapping, and that the short intervals between bins are helpful for fine-mapping of QTLs. Furthermore, bin maps can be used to improve the quality of reference genomes by elucidating the genetic order of unordered regions and anchoring unassigned scaffolds to linkage groups. © The Author 2016. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
-
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable to users during interactive volume rendering. The manipulation of this 'visibility' improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of VHs, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), and this enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major degradation of the VH: visual and numerical differences between the AB-VH and its full-bin counterpart were minor. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better accuracy but a smaller computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
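The core of the AB-VH idea, grouping voxel intensities into a reduced set of adaptive bins with a cluster analysis, can be sketched with a tiny 1-D k-means. This is an illustrative reconstruction under stated assumptions, not the authors' GPU implementation; the function name `adaptive_bins` and the two-population toy "volume" are our own.

```python
import numpy as np

def adaptive_bins(intensities, k, iters=50, seed=0):
    """Group 1-D intensities into k adaptive bins with a tiny k-means (Lloyd)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(intensities, dtype=float)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep old center if a bin empties
                centers[j] = x[labels == j].mean()
    order = np.argsort(centers)                # relabel so bin 0 is the darkest
    relabel = np.empty(k, dtype=int)
    relabel[order] = np.arange(k)
    return relabel[labels], np.sort(centers)

# Toy "volume": a large soft-tissue population and a small bright population
rng = np.random.default_rng(1)
vox = np.concatenate([rng.normal(100, 5, 500), rng.normal(300, 10, 100)])
labels, centers = adaptive_bins(vox, k=2)
```

With only two adaptive bins the two intensity populations are separated, whereas equal-width binning over the same range would waste most bins on empty intensity intervals.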
-
Generalized index for spatial data sets as a measure of complete spatial randomness
NASA Astrophysics Data System (ADS)
Hackett-Jones, Emily J.; Davies, Kale J.; Binder, Benjamin J.; Landman, Kerry A.
2012-06-01
Spatial data sets generated from a wide range of physical systems can be analyzed by counting the number of objects in a set of bins. Previous work has been limited to equal-sized bins, which are inappropriate for some domains (e.g., circular). We consider a nonequal-size bin configuration whereby overlapping or nonoverlapping bins cover the domain. A generalized index, defined in terms of a variance between bin counts, is developed to indicate whether or not a spatial data set, generated from an exclusion or nonexclusion process, is at the complete spatial randomness (CSR) state. Limiting values of the index are determined. Using examples, we investigate trends in the generalized index as a function of density and compare the results with those using equal-sized bins. The smallest bin size must be much larger than the mean size of the objects. We can determine whether or not a spatial data set is at the CSR state by comparing the values of the generalized index for different bin configurations: the values will be approximately the same if the data set is at the CSR state, and will differ otherwise. In general, the generalized index is lower than the limiting value of the index, since objects do not have access to the entire region due to blocking by other objects. These methods are applied to two applications: (i) spatial data sets generated from a cellular automata model of cell aggregation in the enteric nervous system and (ii) a known plant data distribution.
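As a simplified illustration of the idea (this is a classic chi-square dispersion statistic, not the authors' generalized index), one can compare observed counts in unequal bins against their CSR expectation; values near 1 indicate randomness, large values indicate clustering. The function name `csr_index` and the toy data are our own.

```python
import numpy as np

def csr_index(points, bin_edges):
    """Chi-square per degree of freedom of 1-D bin counts against a uniform
    (CSR) model; bins may have unequal widths."""
    points = np.asarray(points)
    counts, _ = np.histogram(points, bins=bin_edges)
    widths = np.diff(bin_edges)
    expected = len(points) * widths / (bin_edges[-1] - bin_edges[0])
    return float(np.sum((counts - expected) ** 2 / expected) / (len(counts) - 1))

rng = np.random.default_rng(0)
uniform = rng.uniform(0, 1, 5000)            # CSR-like data
clustered = rng.normal(0.5, 0.05, 5000)      # strongly clustered data
edges = np.array([0.0, 0.1, 0.3, 0.6, 1.0])  # deliberately unequal bins
```

For the uniform sample the index stays near 1 for any bin configuration, while the clustered sample produces a very large value, mirroring the comparison-across-configurations test described above.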
-
Discretising the velocity distribution for directional dark matter experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kavanagh, Bradley J., E-mail: bradley.kavanagh@cea.fr
2015-07-01
Dark matter (DM) direct detection experiments which are directionally-sensitive may be the only method of probing the full velocity distribution function (VDF) of the Galactic DM halo. We present an angular basis for the DM VDF which can be used to parametrise the distribution in order to mitigate astrophysical uncertainties in future directional experiments and extract information about the DM halo. This basis consists of discretising the VDF in a series of angular bins, with the VDF being only a function of the DM speed v within each bin. In contrast to other methods, such as spherical harmonic expansions, the use of this basis allows us to guarantee that the resulting VDF is everywhere positive and therefore physical. We present a recipe for calculating the event rates corresponding to the discrete VDF for an arbitrary number of angular bins N and investigate the discretisation error which is introduced in this way. For smooth, Standard Halo Model-like distribution functions, only N=3 angular bins are required to achieve an accuracy of around 10–30% in the number of events in each bin. Shortly after confirmation of the DM origin of the signal with around 50 events, this accuracy should be sufficient to allow the discretised velocity distribution to be employed reliably. For more extreme VDFs (such as streams), the discretisation error is typically much larger, but can be improved with increasing N. This method paves the way towards an astrophysics-independent analysis framework for the directional detection of dark matter.
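The angular discretisation itself is easy to sketch: each velocity vector is assigned to one of N bins in cos θ, with θ measured from a reference axis (here the +z axis stands in for the direction of Earth's motion; the paper's full recipe, which also computes event rates per bin, is beyond this sketch). The function name `angular_bin` is our own.

```python
import numpy as np

def angular_bin(velocities, N):
    """Assign 3-D velocity vectors to N equal-width bins in cos(theta),
    theta measured from the +z axis; bin 0 contains the forward direction."""
    v = np.asarray(velocities, dtype=float)
    cos_theta = v[:, 2] / np.linalg.norm(v, axis=1)
    idx = np.floor((1.0 - cos_theta) / 2.0 * N).astype(int)
    return np.clip(idx, 0, N - 1)   # cos(theta) == -1 falls in the last bin

v = np.array([[0.0, 0.0, 220.0],    # aligned with +z   -> bin 0
              [220.0, 0.0, 0.0],    # perpendicular     -> middle bin
              [0.0, 0.0, -220.0]])  # anti-aligned      -> bin N-1
```

Within each such bin the parametrised VDF depends only on the speed v, which is what keeps the reconstruction positive by construction.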
-
Reconfigurable generation and measurement of mutually unbiased bases for time-bin qudits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukens, Joseph M.; Islam, Nurul T.; Lim, Charles Ci Wen
Here, we propose a method for implementing mutually unbiased generation and measurement of time-bin qudits using a cascade of electro-optic phase modulator–coded fiber Bragg grating pairs. Our approach requires only a single spatial mode and can switch rapidly between basis choices. We obtain explicit solutions for dimensions d = 2, 3, and 4 that realize all d + 1 possible mutually unbiased bases and analyze the performance of our approach in quantum key distribution. Given its practicality and compatibility with current technology, our approach provides a promising springboard for scalable processing of high-dimensional time-bin states.
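The defining property of mutually unbiased bases is easy to verify numerically: every state of one basis has squared overlap 1/d with every state of the other. A minimal check for the computational (time-bin) and discrete-Fourier bases at d = 4 follows; the paper's full construction of all d + 1 bases via modulator/grating cascades goes well beyond this sketch.

```python
import numpy as np

def fourier_basis(d):
    """Columns are discrete-Fourier basis states: equal-weight superpositions
    of the d time bins with linear phase ramps."""
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

d = 4
F = fourier_basis(d)
overlap = np.abs(F) ** 2               # |<time-bin j | Fourier state k>|^2
assert np.allclose(overlap, 1.0 / d)   # mutually unbiased: every overlap is 1/d
```

Because the overlaps are uniformly 1/d, a measurement in one basis reveals nothing about a state prepared in the other, which is the property exploited in quantum key distribution.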
-
4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan
NASA Astrophysics Data System (ADS)
Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel
2014-09-01
The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data, and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data were acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration, the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data do not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field (the mean Jacobian change, Δ|J|) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σ_signal). The two irregularity measures, Δ|J| and σ_registration, were dependent on each other (p < 0.0001, R² = 0.80 for P-4D-CT, R² = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower Δ|J| and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The magnitude of the artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase- and amplitude-binned mid-positions without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We quantified the systematic offset of this mid-position model relative to a motion model derived from P-4D-CT. We found that A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.
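The occupancy bookkeeping that the correction relies on can be sketched as follows: amplitude bins are defined between robust percentiles of the respiration trace, and each bin's occupancy is the fraction of time the signal spends in it. The function name, percentile choice, and idealised breathing trace are our own illustration, not the authors' implementation.

```python
import numpy as np

def amplitude_bins(signal, n_bins):
    """Equal-width amplitude bins between the 5th and 95th percentile of a
    respiration trace, plus the occupancy (dwell fraction) of each bin."""
    lo, hi = np.percentile(signal, [5, 95])
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)
    occupancy = np.bincount(idx, minlength=n_bins) / len(signal)
    return idx, occupancy

t = np.linspace(0, 60, 6000)         # 60 s trace sampled at 100 Hz
resp = np.cos(2 * np.pi * t / 4)     # idealised 4 s breathing cycle
idx, occ = amplitude_bins(resp, n_bins=10)
```

A sinusoidal trace dwells longest near its extremes, so the end bins carry the highest occupancy; weighting each bin's image by this occupancy is what restores the time-averaged (mid-position) interpretation that amplitude binning alone discards.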
-
Treu, Laura; Kougias, Panagiotis G; Campanaro, Stefano; Bassani, Ilaria; Angelidaki, Irini
2016-09-01
This research aimed to better characterize the biogas microbiome by means of high-throughput metagenomic sequencing and to elucidate the core microbial consortium existing in biogas reactors independently of the operational conditions. Assembly of shotgun reads followed by an established binning strategy resulted in the largest extraction to date of microbial genomes involved in biogas-producing systems. Remarkably, the vast majority of the 236 extracted genome bins could be characterized only at high taxonomic levels, confirming that the biogas microbiome comprises a consortium of largely unknown species. A comparative analysis between the genome bins of the current study and those extracted from a previous metagenomic assembly demonstrated a similar phylogenetic distribution of the main taxa. Finally, this analysis led to the identification of a subset of common microbes that could be considered the core essential group in biogas production. Copyright © 2016 Elsevier Ltd. All rights reserved.
-
Multifractal diffusion entropy analysis: Optimal bin width of probability histograms
NASA Astrophysics Data System (ADS)
Jizba, Petr; Korbel, Jan
2014-11-01
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin width in histograms generated from underlying probability distributions of interest. The method uses Rényi entropy and mean-squared-error analysis to determine the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on the scaling behavior of financial time series. In particular, we analyze the S&P 500 stock index as sampled at a daily rate over the period 1950-2013. To demonstrate the strength of the proposed method we compare the multifractal δ-spectrum for various bin widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. The connection between the δ-spectrum and Rényi's q parameter is also discussed and elucidated on a simple example of a multiscale time series.
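A simpler, well-known relative of this idea is the Shimazaki-Shinomoto rule, which picks the bin width minimising an estimate of the histogram's mean squared error computed from the mean and variance of the bin counts. The sketch below implements that classic MSE rule, not the paper's Rényi-entropy criterion; the function name and test data are our own.

```python
import numpy as np

def shimazaki_cost(data, n_bins):
    """Shimazaki-Shinomoto cost C = (2*mean - var) / width**2, where mean and
    var are taken over the bin counts; lower cost ~ lower estimated MSE."""
    counts, edges = np.histogram(data, bins=n_bins)
    width = edges[1] - edges[0]
    return (2 * counts.mean() - counts.var()) / width ** 2

rng = np.random.default_rng(42)
data = rng.standard_normal(10_000)
costs = {n: shimazaki_cost(data, n) for n in range(5, 200, 5)}
best = min(costs, key=costs.get)   # bin count with the lowest estimated cost
```

Scanning candidate bin counts and keeping the minimiser gives a data-driven bin width; for 10,000 Gaussian samples the minimum falls at a moderate bin count, between the over-smoothed and noise-dominated extremes.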
-
Raupach, Michael J; Barco, Andrea; Steinke, Dirk; Beermann, Jan; Laakmann, Silke; Mohrbeck, Inga; Neumann, Hermann; Kihara, Terue C; Pointner, Karin; Radulovici, Adriana; Segelken-Voigt, Alexandra; Wesse, Christina; Knebelsberger, Thomas
2015-01-01
In recent years DNA barcoding has become a popular method of choice for molecular specimen identification. Here we present a comprehensive DNA barcode library of various crustacean taxa found in the North Sea, one of the most extensively studied marine regions of the world. Our data set includes 1,332 barcodes covering 205 species, including taxa of the Amphipoda, Copepoda, Decapoda, Isopoda, Thecostraca, and others. This dataset represents the most extensive DNA barcode library of the Crustacea in terms of species number to date. By using the Barcode of Life Data Systems (BOLD), unique BINs were identified for 198 (96.6%) of the analyzed species. Six species were characterized by two BINs (2.9%), and three BINs were found for the amphipod species Gammarus salinus Spooner, 1947 (0.4%). Intraspecific distances higher than 2.2% were revealed for 13 species (6.3%). Exceptionally high distances of up to 14.87% between two distinct but monophyletic clusters were found for the parasitic copepod Caligus elongatus Nordmann, 1832, supporting the results of previous studies that indicated the existence of an overlooked sea louse species. In contrast to these high distances, haplotype sharing was observed for two decapod spider crab species, Macropodia parva Van Noort & Adema, 1985 and Macropodia rostrata (Linnaeus, 1761), underlining the need for a taxonomic revision of both species. Summarizing the results, our study confirms the applicability of DNA barcodes as a highly effective identification system for the analyzed marine crustaceans of the North Sea and represents an important milestone for modern biodiversity assessment studies using barcode sequences.
-
Research and Development of a New Waste Collection Bin to Facilitate Education in Plastic Recycling
ERIC Educational Resources Information Center
Chow, Cheuk-fai; So, Wing-Mui Winnie; Cheung, Tsz-Yan
2016-01-01
Plastic recycling has been an alternative method for solid waste management apart from landfill and incineration. However, recycling quality is affected when all plastics are discarded into a single recycling bin that increases cross contaminations and operation cost to the recycling industry. Following the engineering design process, a new…
-
2015-10-01
capability to meet the task to the standard under the condition, nothing more or less, else the funding is wasted. Also, that funding for the...bin to segregate gaps qualitatively before the gap value model determined preference among gaps within the bins. Computation of a gap's...for communication, interpretation, or processing by humans or by automatic means (as it pertains to modeling and simulation). Delphi Method -- a
-
Bin Packing, Number Balancing, and Rescaling Linear Programs
NASA Astrophysics Data System (ADS)
Hoberg, Rebecca
This thesis deals with several important algorithmic questions using techniques from diverse areas including discrepancy theory, machine learning and lattice theory. In Chapter 2, we construct an improved approximation algorithm for a classical NP-complete problem, the bin packing problem. In this problem, the goal is to pack items of sizes s_i ∈ [0,1] into as few bins as possible, where a set of items fits into a bin provided the sum of the item sizes is at most one. We give a polynomial-time rounding scheme for a standard linear programming relaxation of the problem, yielding a packing that uses at most OPT + O(log OPT) bins. This makes progress towards one of the "10 open problems in approximation algorithms" stated in the book of Shmoys and Williamson. In fact, based on related combinatorial lower bounds, Rothvoss conjectures that Θ(log OPT) may be a tight bound on the additive integrality gap of this LP relaxation. In Chapter 3, we give a new polynomial-time algorithm for linear programming. Our algorithm is based on the multiplicative weights update (MWU) method, which is a general framework that is currently of great interest in theoretical computer science. An algorithm for linear programming based on MWU was known previously, but was not polynomial time -- we remedy this by alternating between a MWU phase and a rescaling phase. The rescaling methods we introduce improve upon previous methods by reducing the number of iterations needed until one can rescale, and they can be used for any algorithm with a similar rescaling structure. Finally, we note that the MWU phase of the algorithm has a simple interpretation as gradient descent of a particular potential function, and we show we can speed up this phase by walking in a direction that decreases both the potential function and its gradient. In Chapter 4, we show that an approximate oracle for Minkowski's Theorem gives an approximate oracle for the number balancing problem (NBP), and conversely.
Number balancing is the problem of minimizing |〈a,x〉| over x ∈ {-1,0,1}^n \ {0}, given a ∈ [0,1]^n. While an application of the pigeonhole principle shows that there always exists x with |〈a,x〉| ≤ O(√n/2^n), the best known algorithm only guarantees |〈a,x〉| ≤ 2^{-Θ(log² n)}. We show that an oracle for Minkowski's Theorem with approximation factor ρ would give an algorithm for NBP that guarantees |〈a,x〉| ≤ 2^{-n^{Θ(1/ρ)}}. In particular, this would beat the bound of Karmarkar and Karp provided ρ ≤ O(log n/log log n). In the other direction, we prove that any polynomial-time algorithm for NBP that guarantees a solution of difference at most 2^{√n}/2^n would give a polynomial approximation for Minkowski's Theorem as well as a polynomial-factor approximation algorithm for the Shortest Vector Problem.
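For intuition about the Karmarkar-Karp bound mentioned above, their largest-differencing heuristic for the ±1 (partitioning) special case of number balancing is short enough to sketch. This classic heuristic is our own illustration of the baseline being compared against, not the thesis's oracle construction.

```python
import heapq

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning: repeatedly
    replace the two largest values by their difference; the final value is the
    achieved partition imbalance |sum(A) - sum(B)|."""
    heap = [-x for x in nums]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = -heapq.heappop(heap), -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))  # committing a and b to opposite sides
    return -heap[0]

print(karmarkar_karp([8, 7, 6, 5, 4]))  # -> 2 (optimal is 0: {8,7} vs {6,5,4})
```

The example shows the heuristic's gap from optimality on even a tiny instance, echoing the abstract's point that its guarantee is far weaker than the exponentially small imbalance the pigeonhole principle proves to exist.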
-
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-07-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to its ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations, including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
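For contrast with the Bayesian treatment, the standard frequentist estimate can be sketched in a few lines: for a compound-Poisson sum of packet weights, the variance of the sum is estimated by the sum of squared weights. Note how it collapses to 0 ± 0 for an empty bin, which is precisely the case the posterior approach handles gracefully. The toy bin rates and exponential weight distribution are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo: a Poisson number of photon packets lands in each bin,
# and every packet carries a random weight (its share of the luminosity).
lam = [200, 20, 0]                           # expected packets per bin
sums, errs = [], []
for l in lam:
    n = rng.poisson(l)
    w = rng.exponential(scale=2.0, size=n)   # packet weights
    sums.append(w.sum())
    # Frequentist estimate: Var(sum of weights) ~= sum of squared weights
    errs.append(np.sqrt(np.sum(w ** 2)))
```

The well-populated bin gets a small relative error, the sparse bin a larger one, and the empty bin reports zero flux with zero uncertainty, clearly too confident, which motivates a posterior that retains width even with no packets.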
-
Engagement Assessment Using EEG Signals
NASA Technical Reports Server (NTRS)
Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean
2012-01-01
In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices are identified and eliminated from the EEG signals, and a wavelet-based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectral densities in 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and the one-way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experimental results showed that there are significant differences in the extracted features among subjects, and we implemented a feature normalization procedure that mitigates these differences and significantly improves engagement assessment performance.
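The 1 Hz-bin spectral features can be sketched with a plain periodogram: FFT the signal, then sum the power falling in each 1 Hz band. The function `bandpower_features` and the synthetic 10 Hz "alpha" test signal are our own illustration, not the paper's exact pipeline.

```python
import numpy as np

def bandpower_features(eeg, fs, max_hz=30):
    """Periodogram power summed into 1 Hz bins: one feature per [f, f+1) Hz."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * len(eeg))
    return np.array([psd[(freqs >= f) & (freqs < f + 1)].sum()
                     for f in range(max_hz)])

fs = 256
t = np.arange(fs * 4) / fs            # 4 s of synthetic signal at 256 Hz
eeg = np.sin(2 * np.pi * 10 * t)      # a pure 10 Hz "alpha" rhythm
feat = bandpower_features(eeg, fs)    # the 10 Hz bin dominates
```

In a real pipeline each channel and epoch yields such a feature vector, which is then screened (e.g. by Fisher score) before classification.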
-
Palm-Vein Classification Based on Principal Orientation Features
Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao
2014-01-01
Personal recognition using palm–vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live-body identification, flexibility, and resistance to forgery. With the expanding application of palm–vein pattern recognition, the corresponding growth of databases has resulted in long response times. To shorten the response time of identification, this paper proposes a simple and useful classification for palm–vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix, and the principal direction of a palm–vein image is then computed from the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed within that bin. To improve recognition efficiency while maintaining recognition accuracy, the two neighboring bins of the corresponding bin are also searched when identifying the input palm–vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range for one test sample in PolyU, CASIA and our database can be reduced by the proposed method to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that of the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
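The bin-then-neighborhood search strategy can be sketched in a few lines. The six-bin split over principal directions in [0, 180) degrees follows the paper's description, while the function names and the toy database below are our own illustration.

```python
N_BINS = 6   # six bins over the principal-direction range, as in the paper

def direction_bin(angle_deg):
    """Map a principal direction in [0, 180) degrees to one of six bins."""
    return int(angle_deg // (180 / N_BINS)) % N_BINS

def candidate_ids(query_angle, database):
    """Search the query's bin plus its two neighbouring bins (with wrap-around),
    mirroring the paper's neighbourhood search to keep accuracy high."""
    b = direction_bin(query_angle)
    wanted = {(b - 1) % N_BINS, b, (b + 1) % N_BINS}
    return [pid for pid, ang in database if direction_bin(ang) in wanted]

db = [("A", 15.0), ("B", 95.0), ("C", 170.0), ("D", 50.0)]
print(candidate_ids(10.0, db))   # -> ['A', 'C', 'D']
```

Only candidates whose registered principal direction falls in the query's bin or an adjacent one are matched one-by-one, which is what shrinks the search range relative to scanning the whole database.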
-
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking randomly piled objects from a bin for industrial production purposes, and robotic bin picking is widely used in automated assembly lines. In order to achieve higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor, the Curve Set Feature (CSF), is proposed to describe a point by the surface fluctuation around it; it is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSFs efficiently. The matching process combines the idea of matching in 2D space from the original Point Pair Feature (PPF) algorithm with nearest-neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and proves to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proves to be robust to noise, able to detect metal parts, and both more accurate and more than 10 times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
-
Bingemann, Dieter; Allen, Rachel M.
2012-01-01
We describe a statistical method to analyze dual-channel photon arrival trajectories from single-molecule spectroscopy in a model-free way to identify break points in the intensity ratio. Photons are binned with a short bin size to calculate the logarithm of the intensity ratio for each bin. Stochastic photon counting noise leads to a near-normal distribution of this logarithm, and the standard Student's t-test is used to find statistically significant changes in this quantity. In stochastic simulations we determine the significance threshold for the t-test's p-value at a given level of confidence. We test the method's sensitivity and accuracy, showing that the analysis reliably locates break points with significant changes in the intensity ratio with little or no error in realistic trajectories with large numbers of change points, while still identifying a large fraction of the frequent break points with small intensity changes. Based on these results we present an approach to estimate confidence intervals for the identified break point locations and recommend a bin size to choose for the analysis. The method proves powerful and reliable in the analysis of simulated and actual data of single-molecule reorientation in a glassy matrix. PMID:22837704
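The core of the approach, binning two channels, taking the log intensity ratio, and t-testing candidate break points, can be sketched as follows. This is a simplified single-break scan under assumed parameters (bin counts, regularisation, thresholds), not the authors' implementation.

```python
import numpy as np
from scipy import stats

def log_ratio(ch_a, ch_b, eps=0.5):
    # eps regularises empty bins before taking the logarithm (an assumption here)
    return np.log((ch_a + eps) / (ch_b + eps))

def most_significant_break(ratio, p_threshold=1e-3):
    """Scan candidate break points; return the index with the smallest
    t-test p-value, or None if no change is significant at p_threshold."""
    best = (None, 1.0)
    for k in range(5, len(ratio) - 5):          # keep a few bins on each side
        t, p = stats.ttest_ind(ratio[:k], ratio[k:], equal_var=False)
        if p < best[1]:
            best = (k, p)
    return best[0] if best[1] < p_threshold else None

# Synthetic trajectory: channel A drops at bin 100, channel B is steady
rng = np.random.default_rng(0)
a = np.concatenate([rng.poisson(20, 100), rng.poisson(5, 100)])
b = rng.poisson(10, 200)
bp = most_significant_break(log_ratio(a, b))    # recovered near bin 100
```

Recursing on the two segments on either side of a detected break extends this to trajectories with many change points.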
-
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers and by the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
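The quantile-binning idea can be sketched as below. For simplicity this stand-in stratifies the confounder empirically rather than fitting the regression-based denominator models the paper compares; the number of categories, the simulated data, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
confounder = rng.normal(size=n)
exposure = confounder + rng.normal(size=n)     # continuous, confounded exposure

def quantile_bins(x, q):
    """Assign each value to one of q equal-frequency (quantile) categories."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, q + 1))
    edges[-1] += 1e-9                          # keep the maximum inside the last bin
    return np.clip(np.digitize(x, edges) - 1, 0, q - 1)

q = 5
exp_bin = quantile_bins(exposure, q)           # discretised exposure
con_bin = quantile_bins(confounder, q)         # crude confounder stratification

# Stabilised weights: marginal P(exposure bin) over P(exposure bin | stratum)
p_denom = np.empty(n)
for c in range(q):
    mask = con_bin == c
    freq = np.bincount(exp_bin[mask], minlength=q) / mask.sum()
    p_denom[mask] = freq[exp_bin[mask]]
p_num = np.bincount(exp_bin, minlength=q)[exp_bin] / n
weights = p_num / p_denom
```

Discretising the exposure sidesteps the choice of a parametric density and the heteroscedasticity problem entirely, which is why the approach is robust across both simulated exposures; the stabilised weights average close to one.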
-
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
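One common way to realise such a binned removal step is a per-bin majorant with accept-reject thinning; a sketch under assumed details (bin edges, toy removal-rate law, names) that are not from the paper:

```python
import numpy as np

def binned_removal_step(diameters, rate, dt, n_bins=20, rng=np.random.default_rng(2)):
    """One stochastic removal step. Particles are grouped into size bins;
    candidates are drawn per bin using the bin's maximum rate, then thinned
    by accept-reject so each particle is removed with probability rate(d)*dt.
    Assumes `rate` increases with diameter (true for the toy law below)."""
    edges = np.geomspace(diameters.min(), diameters.max() * 1.0001, n_bins + 1)
    which = np.clip(np.digitize(diameters, edges) - 1, 0, n_bins - 1)
    keep = np.ones(diameters.size, dtype=bool)
    for b in range(n_bins):
        idx = np.flatnonzero(which == b)
        if idx.size == 0:
            continue
        p_max = min(rate(edges[b + 1]) * dt, 1.0)   # upper bound within the bin
        cand = idx[rng.random(idx.size) < p_max]    # candidate removals
        accept = rng.random(cand.size) < rate(diameters[cand]) * dt / p_max
        keep[cand[accept]] = False
    return diameters[keep]

rng = np.random.default_rng(3)
d = rng.lognormal(mean=-1.0, sigma=0.5, size=20000)
survivors = binned_removal_step(d, rate=lambda x: 0.02 * x, dt=1.0)
```

Because only the binomially drawn candidates are examined rather than every particle, the work per step scales with the expected number of removals, which is what makes the approach pay off most at low removal rates.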
-
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent bins of the detector do not work. The ring artifacts severely degrade CT image quality. We present a practical CT ring artifact reduction method based on projection data correction, aiming at accurately estimating the missing projection data and thus removing the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead bins of the CT detector on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
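The linear-interpolation step at the heart of the scheme (step 2, and again step 7) is simple to state in code. A minimal sketch on a toy sinogram; the FBP/forward-projection steps are omitted and all names are illustrative:

```python
import numpy as np

def interpolate_dead_bins(sinogram, dead_bins):
    """Linearly interpolate the readings of dead detector bins along the
    detector axis, independently for each projection angle (row)."""
    fixed = sinogram.astype(float).copy()
    good = np.setdiff1d(np.arange(sinogram.shape[1]), dead_bins)
    for row in fixed:                     # one row per projection angle
        row[dead_bins] = np.interp(dead_bins, good, row[good])
    return fixed

sino = np.tile(np.linspace(0.0, 1.0, 64), (180, 1))   # toy sinogram, 180 angles
dead = np.array([20, 21, 40])
sino[:, dead] = 0.0                                   # simulate dead bins
restored = interpolate_dead_bins(sino, dead)
```

On this toy example the profile along the detector is linear, so interpolation recovers the missing readings exactly; on real data the interpolation is only a first guess, which is why the iterative filter/reproject/correct loop above is needed.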
-
How to improve the comfort of Kesawan Heritage Corridor, Medan City
NASA Astrophysics Data System (ADS)
Tegar; Ginting, Nurlisa; Suwantoro, H.
2018-03-01
Comfort is indispensable in making a friendly neighborhood or city, especially the comfort of the infrastructure in a corridor. People must be able to feel comfortable in order to act rationally in their physical environment. Existing infrastructure must be able to support Kesawan as a historic district. Kesawan is an area filled with many unique buildings; without comfort, however good the existing buildings' architecture may be, it cannot be enjoyed, which in turn affects the identity of a region or city. The aim of this research is to re-design the public facilities of the Kesawan corridor from the comfort aspect: orientation, traffic calming, vegetation, signage, public facilities (toilets, seating, bus stops, bins), information center, parking, and the pedestrian path. The design concept is translated into design criteria. This research uses qualitative methods. Some facilities in this corridor are unsuitable, and some are not available at all, so improvements and additions to the existing facilities are needed. It is expected that by upgrading the existing facilities, visitors who come to Kesawan will be able to enjoy it more, making Medan a friendlier city.
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Tong, E-mail: tongzhu2@illinois.edu; Levin, Deborah A., E-mail: deblevin@illinois.edu; Li, Zheng, E-mail: zul107@psu.edu
2016-08-14
A high fidelity internal energy relaxation model for N₂–N suitable for use in direct simulation Monte Carlo (DSMC) modeling of chemically reacting flows is proposed. A novel two-dimensional binning approach with variable bin energy resolutions in the rotational and vibrational modes is developed for treating the internal mode of N₂. Both bin-to-bin and state-specific relaxation cross sections are obtained using the molecular dynamics/quasi-classical trajectory (MD/QCT) method with two potential energy surfaces as well as the state-specific database of Jaffe et al. The MD/QCT simulations of inelastic energy exchange between N₂ and N show that there is a strong forward-preferential scattering behavior at high collision velocities. The 99-bin model is used in homogeneous DSMC relaxation simulations and is found to be able to recover the state-specific master equation results of Panesi et al. when the Jaffe state-specific cross sections are used. Rotational relaxation energy profiles and relaxation times obtained using the ReaxFF and Jaffe potential energy surfaces (PESs) are in general agreement, but there are larger differences between the vibrational relaxation times. These differences become smaller as the translational temperature increases because the difference in the PES energy barrier becomes less important.
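A variable-resolution 2-D binning of the rovibrational ladder can be sketched with non-uniform edges and `np.digitize`. The edge values and bin counts below are illustrative assumptions, not the paper's 99-bin layout:

```python
import numpy as np

# Non-uniform bin edges (finer at low energy), an illustrative choice in eV
rot_edges = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0, 4.0, 8.0])
vib_edges = np.array([0.0, 0.3, 0.9, 2.0, 4.0, 8.0])

def bin_index(e_rot, e_vib):
    """Map a (rotational, vibrational) energy pair to a single 2-D bin id."""
    i = np.digitize(e_rot, rot_edges) - 1
    j = np.digitize(e_vib, vib_edges) - 1
    return i * (len(vib_edges) - 1) + j

# A state with 0.25 eV rotational and 1.5 eV vibrational energy
idx = bin_index(0.25, 1.5)
```

Grouping states this way reduces the number of cross sections the DSMC solver must tabulate from the full state-to-state count to the far smaller bin-to-bin count, while the variable edges preserve resolution where the populations are largest.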
-
Temporal entanglement and quantum communication (Intrication temporelle et communication quantique)
NASA Astrophysics Data System (ADS)
Bussieres, Felix
Quantum communication is the art of transferring a quantum state from one place to another and the study of tasks that can be accomplished with it. This thesis is devoted to the development of tools and tasks for quantum communication in a real-world setting. These were implemented using an underground optical fibre link deployed in an urban environment. The technological and theoretical innovations presented here broaden the range of applications of time-bin entanglement through new methods of manipulating time-bin qubits, a novel model for characterizing sources of photon pairs, new ways of testing non-locality and the design and the first implementation of a new loss-tolerant quantum coin-flipping protocol. Manipulating time-bin qubits. A single photon is an excellent vehicle in which a qubit, the fundamental unit of quantum information, can be encoded. In particular, the time-bin encoding of photonic qubits is well suited for optical fibre transmission. Before this thesis, the applications of quantum communication based on the time-bin encoding were limited due to the lack of methods to implement arbitrary operations and measurements. We have removed this restriction by proposing the first methods to realize arbitrary deterministic operations on time-bin qubits as well as single qubit measurements in an arbitrary basis. We applied these propositions to the specific case of optical measurement-based quantum computing and showed how to implement the feedforward operations, which are essential to this model. This therefore opens new possibilities for creating an optical quantum computer, but also for other quantum communication tasks. Characterizing sources of photon pairs. Experimental quantum communication requires the creation of single photons and entangled photons. These two ingredients can be obtained from a source of photon pairs based on non-linear spontaneous processes. 
Several tasks in quantum communication require a precise knowledge of the properties of the source being used. We developed and implemented a fast and simple method to characterize a source of photon pairs. This method is well suited for a realistic setting where experimental conditions, such as channel transmittance, may fluctuate, and for which the characterization of the source has to be done in real time. Testing the non-locality of time-bin entanglement. Entanglement is a resource needed for the realization of many important tasks in quantum communication. It also allows two physical systems to be correlated in a way that cannot be explained by classical physics; this manifestation of entanglement is called non-locality. We built a source of time-bin entangled photonic qubits and characterized it with the new methods implementing arbitrary single qubit measurements that we developed. This allowed us to reveal the non-local nature of our source of entanglement in ways that had never been implemented before. It also opens the door to studying previously untested features of non-locality using this source. These experiments were performed in a realistic setting where quantum (non-local) correlations were observed even after transmission of one of the entangled qubits over 12.4 km of an underground optical fibre. Flipping quantum coins. Quantum coin-flipping is a quantum cryptographic primitive proposed in 1984, when the very first steps of quantum communication were being taken, in which two players alternate in sending classical and quantum information in order to generate a shared random bit. The use of quantum information is such that a potential cheater cannot force the outcome to his choice with certainty. Classically, however, one of the players can always deterministically choose the outcome.
Unfortunately, the security of all previous quantum coin-flipping protocols is seriously compromised in the presence of losses on the transmission channel, thereby making this task impractical. We found a solution to this problem and obtained the first loss-tolerant quantum coin-flipping protocol whose security is independent of the amount of the losses. We have also experimentally demonstrated our loss-tolerant protocol using our source of time-bin entanglement combined with our arbitrary single qubit measurement methods. This experiment took place in a realistic setting where qubits travelled over an underground optical fibre link. This new task thus joins quantum key distribution as a practical application of quantum communication. Keywords. quantum communication, photonics, time-bin encoding, source of photon pairs, heralded single photon source, entanglement, non-locality, time-bin entanglement, hybrid entanglement, quantum network, quantum cryptography, quantum coin-flipping, measurement-based quantum computation, telecommunication, optical fibre, nonlinear optics.
-
Consistency Check for the Bin Packing Constraint Revisited
NASA Astrophysics Data System (ADS)
Dupuis, Julien; Schaus, Pierre; Deville, Yves
The bin packing problem (BP) consists in finding the minimum number of bins necessary to pack a set of items so that the total size of the items in each bin does not exceed the bin capacity C. The bin capacity is common for all the bins.
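The definition above invites the classic first-fit decreasing heuristic, which gives a quick upper bound on the minimum number of bins. A minimal sketch (this is the standard heuristic, not the consistency check the article itself analyses):

```python
def first_fit_decreasing(items, capacity):
    """Pack items into bins of common capacity using first-fit decreasing:
    sort items largest-first, place each in the first bin with room,
    opening a new bin only when none fits."""
    free = []                                   # remaining capacity per bin
    contents = []
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                free[i] -= item
                contents[i].append(item)
                break
        else:                                   # no open bin fits: open a new one
            free.append(capacity - item)
            contents.append([item])
    return contents

packing = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
# -> [[8, 2], [4, 4, 1, 1]]: 2 bins, matching the optimum here
```

First-fit decreasing is guaranteed to use at most 11/9 OPT + 1 bins, which is why such heuristics are useful companions to an exact constraint-based consistency check.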
-
40 CFR 63.9882 - What parts of my plant does this subpart cover?
Code of Federal Regulations, 2010 CFR
2010-07-01
... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...
-
Comparison of recycling outcomes in three types of recycling collection units.
Andrews, Ashley; Gregoire, Mary; Rasmussen, Heather; Witowich, Gretchen
2013-03-01
Commercial institutions have many factors to consider when implementing an effective recycling program. This study examined the effectiveness of three different types of recycling bins on recycling accuracy by determining the percent weight of recyclable material placed in the recycling bins, comparing the percent weight of recyclable material by type of container used, and examining whether a change in signage increased recycling accuracy. Data were collected over 6 weeks, totaling 30 days, from 3 different recycling bin types at a Midwest University medical center. Five bin locations for each bin type were used. Bags from these bins were collected, sorted into recyclable and non-recyclable material, and weighed. The percent recyclable material was calculated using these weights. Common contaminants found in the bins were napkins and paper towels, plastic food wrapping, plastic bags, and coffee cups. The results showed a significant difference in percent recyclable material between bin types and bin locations. Bin type 2 was found to have one bin location that was statistically different (p=0.048), which may have been due to the lack of a trash bin next to the recycling bin in that location. Bin type 3 had significantly lower percent recyclable material (p<0.001), which may have been due to the lack of a trash bin next to the recycling bin and increased contamination due to the combination of commingled and paper recycling into one bag. There was no significant change in percent recyclable material in recycling bins after the signage change. These results suggest a signage change, when used alone, may not be an effective way to increase recycling compliance and accuracy. This study showed that two- or three-compartment bins located next to a trash bin may be the best bin type for recycling accuracy. Copyright © 2012 Elsevier Ltd. All rights reserved.
-
Monitoring household waste recycling centres performance using mean bin weight analyses.
Maynard, Sarah; Cherrett, Tom; Waterson, Ben
2009-02-01
This paper describes a modelling approach used to investigate the significance of key factors (vehicle type, compaction type, site design, temporal effects) in influencing the variability in observed nett amenity bin weights produced by household waste recycling centres (HWRCs). This new method can help to quickly identify sites that are producing significantly lighter bins, enabling detailed back-end analyses to be efficiently targeted and best practice in HWRC operation to be identified. Tested on weigh-ticket data from nine HWRCs across West Sussex, UK, the model suggests that compaction technique, vehicle type, month and site design explained 76% of the variability in the observed nett amenity weights. For each factor, a weighting coefficient was calculated to generate a predicted nett weight for each bin transaction, and three sites were subsequently identified as having similar characteristics but significantly different mean nett bin weights. Waste and site audits were then conducted at the three sites to try to determine the possible sources of the remaining variability. Significant differences were identified in the proportions of contained (bagged) waste, wood, and dry recyclables entering the amenity waste stream, particularly at one site where significantly less contaminated waste and fewer dry recyclables were observed.
-
Raupach, Michael J.; Barco, Andrea; Steinke, Dirk; Beermann, Jan; Laakmann, Silke; Mohrbeck, Inga; Neumann, Hermann; Kihara, Terue C.; Pointner, Karin; Radulovici, Adriana; Segelken-Voigt, Alexandra; Wesse, Christina; Knebelsberger, Thomas
2015-01-01
In recent years, DNA barcoding has become a popular method for molecular specimen identification. Here we present a comprehensive DNA barcode library of various crustacean taxa found in the North Sea, one of the most extensively studied marine regions of the world. Our data set includes 1,332 barcodes covering 205 species, including taxa of the Amphipoda, Copepoda, Decapoda, Isopoda, Thecostraca, and others. This dataset represents the most extensive DNA barcode library of the Crustacea in terms of species number to date. By using the Barcode of Life Data Systems (BOLD), unique BINs were identified for 198 (96.6%) of the analyzed species. Six species were characterized by two BINs (2.9%), and three BINs were found for the amphipod species Gammarus salinus Spooner, 1947 (0.4%). Intraspecific distances with values higher than 2.2% were revealed for 13 species (6.3%). Exceptionally high distances of up to 14.87% between two distinct but monophyletic clusters were found for the parasitic copepod Caligus elongatus Nordmann, 1832, supporting the results of previous studies that indicated the existence of an overlooked sea louse species. In contrast to these high distances, haplotype sharing was observed for two decapod spider crab species, Macropodia parva Van Noort & Adema, 1985 and Macropodia rostrata (Linnaeus, 1761), underlining the need for a taxonomic revision of both species. Summarizing the results, our study confirms the application of DNA barcodes as a highly effective identification system for the analyzed marine crustaceans of the North Sea and represents an important milestone for modern biodiversity assessment studies using barcode sequences. PMID:26417993
-
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
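The idea of brightness-adaptive binning, averaging neighbours more aggressively where the scene is dark, can be sketched conceptually. This is a frame-based illustration with an assumed threshold and 3x3 window; the paper's ISP version works on two line-memories, not whole frames:

```python
import numpy as np

def adaptive_binning(img, dark_threshold=64.0):
    """Blend each pixel with its 3x3 neighbourhood mean; dark regions get
    more binning (noise reduction), bright regions keep full resolution."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')
    # 3x3 box mean via nine shifted copies of the padded frame
    acc = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    # binning weight: 1 in very dark areas, falling to 0 at the threshold
    w = np.clip(1.0 - img / dark_threshold, 0.0, 1.0)
    return w * acc + (1.0 - w) * img
```

Bright pixels get weight 0 and pass through untouched (preserving resolution and avoiding saturation), while dark pixels are pulled toward the local mean, trading resolution for a lower noise level exactly where noise dominates.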
-
Chan, Yiu C; Sinha, Rajiv K; Weijin Wang
2011-05-01
This study investigated greenhouse gas (GHG) emissions from three different home waste treatment methods in Brisbane, Australia. Gas samples were taken monthly from 34 backyard composting bins from January to April 2009. Averaged over the study period, the aerobic composting bins released lower amounts of CH₄ (2.2 mg m⁻² h⁻¹) than the anaerobic digestion bins (9.5 mg m⁻² h⁻¹) and the vermicomposting bins (4.8 mg m⁻² h⁻¹). The vermicomposting bins had lower N₂O emission rates (1.2 mg m⁻² h⁻¹) than the others (1.5-1.6 mg m⁻² h⁻¹). Total GHG emissions including both N₂O and CH₄ were 463, 504 and 694 mg CO₂-e m⁻² h⁻¹ for vermicomposting, aerobic composting and anaerobic digestion, respectively, with N₂O contributing >80% of the total budget. The GHG emissions varied substantially with time and were regulated by temperature, moisture content and the waste properties, indicating the potential to mitigate GHG emission through proper management of the composting systems. In comparison with other mainstream municipal waste management options, including centralized composting and anaerobic digestion facilities, landfilling and incineration, home composting has the potential to reduce GHG emissions through both lower on-site emissions and the minimal need for transportation and processing. On account of its lower cost, the present results suggest that home composting provides an effective and feasible supplementary waste management method to a centralized facility, in particular for cities with lower population density such as Australian cities.
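The CO₂-equivalent totals above combine the two gas fluxes via their global warming potentials. A sketch of that arithmetic, using IPCC AR4 100-year GWPs as an assumption (the paper's exact factors are not stated in this abstract, which is why the toy total differs slightly from the reported 463):

```python
# 100-year global warming potentials (IPCC AR4 values; an assumption here)
GWP_CH4, GWP_N2O = 25.0, 298.0

def co2_equivalent(ch4_rate, n2o_rate):
    """Combine CH4 and N2O fluxes (mg m^-2 h^-1) into mg CO2-e m^-2 h^-1."""
    return ch4_rate * GWP_CH4 + n2o_rate * GWP_N2O

# Vermicomposting bins from the study: CH4 = 4.8, N2O = 1.2 mg m^-2 h^-1
total = co2_equivalent(4.8, 1.2)     # ~478 with AR4 GWPs; the paper reports 463
n2o_share = 1.2 * GWP_N2O / total    # N2O dominates the budget
```

The share calculation makes the abstract's point concrete: even at a quarter of the CH₄ flux by mass, N₂O's much larger GWP lets it dominate the CO₂-e budget.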
-
NASA Astrophysics Data System (ADS)
Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.
2018-03-01
Over the years, mask and wafer defect dispositioning has become an increasingly challenging and time consuming task. With design rules getting smaller, OPC getting more complex, and scanner illumination taking on free-form shapes, the probability that a user can perform accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (<30 nm) were reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on the wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex-shape patterns and is benchmarked against an existing qualified measurement methodology.
-
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
The common solution for a field programmable gate array (FPGA)-based time-to-digital converter (TDC) is to construct a tapped delay line (TDL) for time interpolation to yield sub-clock time resolution. The granularity and uniformity of the delay elements of the TDL determine the TDC time resolution. In this paper, we propose a dual-sampling TDL architecture and a bin decimation method that make the delay elements as small and uniform as possible, so that the implemented TDCs can achieve a time resolution beyond the intrinsic cell delay. Two identical fully hardware-based TDCs were implemented in a Xilinx UltraScale FPGA for performance evaluation. For fixed time intervals in the range from 0 to 440 ns, the average time-interval RMS resolution measured by the two TDCs is 4.2 ps; the timestamp resolution of a single TDC is thus derived as 2.97 ps. The maximum hit rate of the TDC is as high as half the system clock rate of the FPGA, namely 250 MHz in our demo prototype. Because the conventional online bin-by-bin calibration is not needed, the implementation of the proposed TDC is straightforward and relatively resource-saving.
-
Optimizing and evaluating the reconstruction of Metagenome-assembled microbial genomes.
Papudeshi, Bhavya; Haggerty, J Matthew; Doane, Michael; Morris, Megan M; Walsh, Kevin; Beattie, Douglas T; Pande, Dnyanada; Zaeri, Parisa; Silva, Genivaldo G Z; Thompson, Fabiano; Edwards, Robert A; Dinsdale, Elizabeth A
2017-11-28
Microbiome/host interactions describe characteristics that affect the host's health. Shotgun metagenomics involves sequencing a random subset of the microbiome to analyze its taxonomic and metabolic potential. Reconstruction of DNA fragments into genomes from metagenomes (called metagenome-assembled genomes) assigns unknown fragments to taxa/function and facilitates discovery of novel organisms. Genome reconstruction incorporates sequence assembly and sorting of assembled sequences into bins characteristic of a genome. However, the microbial community composition, including taxonomic and phylogenetic diversity, may influence genome reconstruction. We determined the optimal reconstruction method for four microbiome projects that had variable sequencing platforms (IonTorrent and Illumina), diversity (high or low), and environment (coral reefs and kelp forests), using a set of parameters to select for optimal assembly and binning tools. We tested the effects of the assembly and binning processes on population genome reconstruction using 105 marine metagenomes from 4 projects. Reconstructed genomes were obtained from each project using 3 assemblers (IDBA, MetaVelvet, and SPAdes) and 2 binning tools (GroopM and MetaBat). We assessed the efficiency of assemblers using statistics including contig continuity and contig chimerism, and the effectiveness of binning tools using genome completeness and taxonomic identification. We concluded that SPAdes assembled more contigs (143,718 ± 124 contigs) of longer length (N50 = 1632 ± 108 bp) and incorporated the most sequences (sequences-assembled = 19.65%). The microbial richness and evenness were maintained across the assembly, suggesting few chimeric contigs. The SPAdes assembly was responsive to the biological and technological variations within each project, compared with the other assemblers.
Among binning tools, we conclude that MetaBat produced bins with less variation in GC content (average standard deviation: 1.49), low species richness (4.91 ± 0.66), and higher genome completeness (40.92 ± 1.75) across all projects. MetaBat extracted 115 bins from the 4 projects, of which 66 bins were identified as reconstructed metagenome-assembled genomes with sequences belonging to a specific genus. We identified 13 novel genomes, some of which were 100% complete but show low similarity to genomes within databases. In conclusion, we present a set of biologically relevant parameters for evaluation to select for optimal assembly and binning tools. For the tools we tested, the SPAdes assembler and MetaBat binning tool reconstructed quality metagenome-assembled genomes for the four projects. We also conclude that metagenomes from microbial communities that have high coverage of phylogenetically distinct taxa and low taxonomic diversity yield the highest quality metagenome-assembled genomes.
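The GC-content criterion used above to judge bin quality is easy to compute. A toy sketch (the sequences and names are illustrative, not project data):

```python
import statistics

def gc_percent(seq):
    """GC content of one contig, in percent."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def bin_gc_spread(contigs):
    """Standard deviation of per-contig GC content within one bin; a large
    spread is one hint that a bin is chimeric (mixes several species)."""
    return statistics.pstdev(gc_percent(c) for c in contigs)

clean_bin = ["ATGCATGC", "AATGCGCT", "ATGCGTAC"]   # uniform ~50% GC
mixed_bin = ["GCGCGCGC", "ATATATAT", "GCGCATAT"]   # 100%, 0%, 50% GC
```

Because each genome has a fairly characteristic GC content, a low within-bin spread (like MetaBat's average of 1.49 reported above) is consistent with single-genome bins, while a large spread flags a likely chimera.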
-
NASA Astrophysics Data System (ADS)
Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas
2013-04-01
Optical Particle Counters (OPCs) are the de facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table along with their instrument which indicates the particle diameters that represent the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and obvious problems occur when refractive index corrections are attempted because multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines.
As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP), calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in the inaccessible regions of the Sahara.
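One simple way to turn cross-section bin boundaries into a diameter bin centre and width, when the diameter-to-cross-section curve need not be monotonic, is to collect every tabulated diameter whose cross section falls inside the bin. The sketch below uses a toy analytic curve standing in for a real Mie calculation; it is an illustration of the idea, not the authors' released software.

```python
import numpy as np

def bin_in_diameter(diam, csca, sigma_lo, sigma_hi):
    """Given a tabulated scattering-cross-section curve csca(diam),
    which need not be monotonic, return an effective bin centre and
    width in diameter for a bin bounded by [sigma_lo, sigma_hi] in
    cross-section space."""
    inside = (csca >= sigma_lo) & (csca <= sigma_hi)
    d_in = diam[inside]
    if d_in.size == 0:
        return None, None
    return d_in.mean(), d_in.max() - d_in.min()

# Hypothetical smooth curve standing in for a Mie calculation
d = np.linspace(0.1, 3.0, 1000)               # diameter, um
c = 0.5 * d**2 * (1 + 0.3 * np.sin(4 * d))    # toy cross section, um^2
print(bin_in_diameter(d, c, 0.2, 0.5))
```

With a real scattering calculation, `csca` would be evaluated at the refractive index of the measured aerosol rather than the calibration spheres.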
-
Octree Bin-to-Bin Fractional-NTC Collisions
2015-09-17
[Briefing charts, 24 August 2015 - 17 September 2015. Title: Octree Bin-to-Bin Fractional-NTC Collisions. Author: Robert Martin (ERC Inc., Spacecraft Propulsion; AFRL/RQRS). Distribution A: approved for public release. Outline: 1. Background; 2. Fractional collisions; 3. Bin...]
-
NASA Astrophysics Data System (ADS)
Yoo, Jinwon; Choi, Yujun; Cho, Young-Wook; Han, Sang-Wook; Lee, Sang-Yun; Moon, Sung; Oh, Kyunghwan; Kim, Yong-Su
2018-07-01
We present a detailed method to prepare and characterize four-dimensional pure quantum states, or ququarts, using polarization and time-bin modes of a single photon. In particular, we provide a simple method to generate an arbitrary pure ququart and fully characterize the state with quantum state tomography. We also verify the reliability of the recipe by showing experimental preparation and characterization of 20 ququart states in mutually unbiased bases. As qudits provide superior properties over qubits in many fundamental tests of quantum physics and applications in quantum information processing, the presented method will be useful for photonic quantum information science.
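A ququart in this encoding can be sketched as a normalized 4-component vector over the joint polarization and time-bin basis; the basis ordering and example states below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative basis ordering: |H,t0>, |H,t1>, |V,t0>, |V,t1>
# (polarization tensor time bin)
def ket(amplitudes):
    """Normalize a list of complex amplitudes into a pure state."""
    v = np.asarray(amplitudes, dtype=complex)
    return v / np.linalg.norm(v)

def fidelity(psi, phi):
    """|<psi|phi>|^2 for pure states (np.vdot conjugates psi)."""
    return abs(np.vdot(psi, phi)) ** 2

# Two example ququart superpositions across polarization and time bin
psi = ket([1, 0, 0, 1])    # (|H,t0> + |V,t1>) / sqrt(2)
phi = ket([1, 0, 0, -1])   # (|H,t0> - |V,t1>) / sqrt(2)
print(fidelity(psi, psi))  # 1.0
print(fidelity(psi, phi))  # 0.0
```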
-
A frequency domain analysis of respiratory variations in the seismocardiogram signal.
Pandia, Keya; Inan, Omer T; Kovacs, Gregory T A
2013-01-01
The seismocardiogram (SCG) signal, traditionally measured using a chest-mounted accelerometer, contains low-frequency (0-100 Hz) cardiac vibrations that can be used to derive diagnostically relevant information about cardiovascular and cardiopulmonary health. This work is aimed at investigating the effects of respiration on the frequency domain characteristics of SCG signals measured from 18 healthy subjects. Toward this end, the 0-100 Hz SCG signal bandwidth of interest was sub-divided into 5 Hz and 10 Hz frequency bins to compare the spectral energy in corresponding frequency bins of the SCG signal measured during three key conditions of respiration: inspiration, expiration, and apnea. Statistically significant differences were observed between the power in ensemble-averaged inspiratory and expiratory SCG beats, and between ensemble-averaged inspiratory and apneic beats, across the 18 subjects for multiple frequency bins in the 10-40 Hz frequency range. Accordingly, the spectral analysis methods described in this paper could provide complementary and improved classification of respiratory modulations in the SCG signal over and above time-domain SCG analysis methods.
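Summing spectral energy into fixed-width frequency bins, as described above, can be sketched with a plain periodogram; the sampling rate and test tone below are hypothetical, not taken from the study's recordings.

```python
import numpy as np

def band_energies(signal, fs, bin_hz=5.0, fmax=100.0):
    """Sum the periodogram power of `signal` into consecutive
    frequency bins of width `bin_hz` Hz, from 0 up to `fmax` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.arange(0.0, fmax + bin_hz, bin_hz)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]

# Hypothetical 22 Hz test tone sampled at 250 Hz for 2 s
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 22.0 * t)
energies = band_energies(sig, fs)
print(np.argmax(energies))  # 4: the 20-25 Hz bin dominates
```

In practice one would compute these energies per ensemble-averaged SCG beat and compare corresponding bins across respiratory conditions.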
-
The Fifth Bin - Opportunity to Empower the National Four Bin Analysis Discussion
2012-06-01
[Garbled extract: the abstract cites "Analysis and Methods for the Exploitation of ELICIT Experimental Data" (Martin & McEver, 2008), in which the authors present illustrative examples of data..., and Adm. Mullen from the Pentagon. References include: Eggenhofer, Petra M., Reiner K. Huber, & Sebastian Richter, "Communication Processes...Environment", 13th ICCRTS, Bellevue, WA, 2008, http://www.dodccrp.org/events/13th_iccrts_2008/CD/html/papers/190.pdf; (Egenhofer, et al., 2003); (Martin & McEver, 2008).]
-
Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.
2014-01-01
Purpose: To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory: Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods: Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results: An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion: This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210
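MSI acquisitions are commonly merged into a single image with a root-sum-of-squares combination across off-resonance bins; the sketch below shows that combination step only (an assumption for illustration, not the paper's accelerated parallel-imaging reconstruction).

```python
import numpy as np

def combine_bins(bin_images):
    """Root-sum-of-squares combination of per-bin images: each
    off-resonance bin contributes where its spatial sensitivity
    captured signal, analogous to combining receiver-coil images."""
    stack = np.asarray(bin_images)
    return np.sqrt((np.abs(stack) ** 2).sum(axis=0))

# Two hypothetical 2x2 bin images with magnitudes 3 and 4 everywhere
imgs = [np.full((2, 2), 3.0), np.full((2, 2), 4.0)]
print(combine_bins(imgs))  # all entries 5.0
```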
-
Limited-angle effect compensation for respiratory binned cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Wenyuan; Yang, Yongyi, E-mail: yy@ece.iit.edu; Wernick, Miles N.
Purpose: In cardiac single photon emission computed tomography (SPECT), respiratory-binned study is used to combat the motion blur associated with respiratory motion. However, owing to the variability in respiratory patterns during data acquisition, the acquired data counts can vary significantly both among respiratory bins and among projection angles within individual bins. If not properly accounted for, such variation could lead to artifacts similar to limited-angle effect in image reconstruction. In this work, the authors aim to investigate several reconstruction strategies for compensating the limited-angle effect in respiratory binned data for the purpose of reducing the image artifacts. Methods: The authors first consider a model based correction approach, in which the variation in acquisition time is directly incorporated into the imaging model, such that the data statistics are accurately described among both the projection angles and respiratory bins. Afterward, the authors consider an approximation approach, in which the acquired data are rescaled to accommodate the variation in acquisition time among different projection angles while the imaging model is kept unchanged. In addition, the authors also consider the use of a smoothing prior in reconstruction for suppressing the artifacts associated with limited-angle effect. In our evaluation study, the authors first used Monte Carlo simulated imaging with 4D NCAT phantom wherein the ground truth is known for quantitative comparison. The authors evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, uniformity and spatial resolution of the left ventricle (LV) wall, and detectability of perfusion defect using a channelized Hotelling observer. As a preliminary demonstration, the authors also tested the different approaches on five sets of clinical acquisitions.
Results: The quantitative evaluation results show that the three compensation methods could all, but to different extents, reduce the reconstruction artifacts over no compensation. In particular, the model based approach reduced the mean-squared-error of the reconstructed myocardium by as much as 40%. Compared to the approach of data rescaling, the model based approach further improved both the overall and regional accuracy of the myocardium; it also further improved the lesion detectability and the uniformity of the LV wall. When ML reconstruction was used, the model based approach was notably more effective for improving the LV wall; when MAP reconstruction was used, the smoothing prior could reduce the noise level and artifacts with little or no increase in bias, but at the cost of a slight resolution loss of the LV wall. The improvements in image quality by the different compensation methods were also observed in the clinical acquisitions. Conclusions: Compensating for the uneven distribution of acquisition time among both projection angles and respiratory bins can effectively reduce the limited-angle artifacts in respiratory-binned cardiac SPECT reconstruction. Direct incorporation of the time variation into the imaging model together with a smoothing prior in reconstruction can lead to the most improvement in the accuracy of the reconstructed myocardium.
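The data-rescaling approximation described above amounts to scaling each projection angle's counts to a common reference acquisition time; a minimal illustration with hypothetical numbers follows (a sketch of the idea, not the authors' implementation).

```python
import numpy as np

def rescale_counts(counts, acq_time, ref_time=None):
    """Rescale per-angle counts of one respiratory bin so every
    projection angle corresponds to a common reference acquisition
    time, leaving the imaging model unchanged.
    counts:   (n_angles, n_pix) sinogram for one respiratory bin
    acq_time: (n_angles,) acquisition time per projection angle, s
    """
    acq_time = np.asarray(acq_time, dtype=float)
    if ref_time is None:
        ref_time = acq_time.mean()
    scale = ref_time / acq_time          # per-angle correction factor
    return np.asarray(counts) * scale[:, None]

# Hypothetical 3-angle, 4-pixel sinogram with uneven dwell times
counts = np.array([[10., 20., 30., 40.],
                   [ 5., 10., 15., 20.],
                   [20., 40., 60., 80.]])
times = np.array([2.0, 1.0, 4.0])
print(rescale_counts(counts, times, ref_time=2.0))
```

After rescaling, all three rows carry counts consistent with a 2 s dwell, at the cost of no longer being Poisson-distributed, which is why the model-based approach describes the statistics more accurately.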
-
Zhang, Yinan; Chu, Chunli; Li, Tong; Xu, Shengguo; Liu, Lei; Ju, Meiting
2017-12-01
Severe water pollution and resource scarcity are major problems in China, where it is necessary to establish water quality-oriented monitoring and intelligent watershed management. In this study, an effective watershed management method is explored, in which water quality is first assessed using the heavy metal pollution index and the human health risk index, and then by classifying the pollution and management grade based on cluster analysis and GIS visualization. Three marine reserves in Tianjin were selected and analyzed, namely the Tianjin Ancient Coastal Wetland National Nature Reserve (Qilihai Natural Reserve), the Tianjin DaShentang Oyster Reef National Marine Special Reserve (DaShentang Reserve), and the Tianjin Coastal Wetland National Marine Special Reserve (BinHai Wetland Reserve), which is under construction. The water quality and potential human health risks of 5 heavy metals (Pb, As, Cd, Hg, Cr) in the three reserves were assessed using the Nemerow index and USEPA methods. Moreover, ArcGIS 10.2 software was used to visualize the heavy metal indices and display their spatial distribution. Cluster analysis enabled classification of the heavy metals into 4 categories, which allowed for identification of the heavy metals whose pollution index and health risks were highest and, thus, whose control in the reserve is a priority. Results indicate that heavy metal pollution exists in the Qilihai Natural Reserve and in the north and east of the DaShentang Reserve; furthermore, human health risks exist in the Qilihai Natural Reserve and in the BinHai Wetland Reserve. In each reserve, the main factors influencing the pollution and health risk were high concentrations of As and Pb that exceed the corresponding standards. Measures must be adopted to control and remediate the pollutants. Furthermore, to protect the marine reserves, management policies must be implemented to improve water quality, which is an urgent task for both local and national governments.
Copyright © 2017 Elsevier B.V. All rights reserved.
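The Nemerow index used above combines the mean and the maximum of the single-factor pollution indices, so one badly exceeded standard dominates the score. A minimal sketch follows; the concentrations and standards are hypothetical illustrations, not the study's measured values.

```python
import math

def nemerow_index(concentrations, standards):
    """Nemerow comprehensive pollution index:
    P = sqrt((P_avg^2 + P_max^2) / 2), where P_i = C_i / S_i is the
    single-factor index of each pollutant against its standard."""
    p = [c / s for c, s in zip(concentrations, standards)]
    p_avg = sum(p) / len(p)
    p_max = max(p)
    return math.sqrt((p_avg ** 2 + p_max ** 2) / 2.0)

# Hypothetical concentrations (ug/L) for Pb, As, Cd, Hg, Cr against
# illustrative water-quality standards
conc = [12.0, 60.0, 1.0, 0.05, 20.0]
std  = [10.0, 50.0, 5.0, 0.05, 50.0]
print(round(nemerow_index(conc, std), 3))  # 1.02
```

Values above 1 indicate that the combined metal burden exceeds the applied standards, which is how priority metals such as As and Pb are flagged.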
-
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
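The posterior described above can be sketched as follows: a simplified re-implementation based on Knuth's published relative log posterior for a piecewise-constant density, not the released optBINS package itself.

```python
import math
import random

def log_posterior(data, m):
    """Relative log posterior for a uniform-width histogram with m
    bins, from a multinomial likelihood and non-informative prior."""
    n = len(data)
    lo, hi = min(data), max(data)
    counts = [0] * m
    for x in data:
        k = min(int((x - lo) / (hi - lo) * m), m - 1)
        counts[k] += 1
    return (n * math.log(m)
            + math.lgamma(m / 2.0)
            - m * math.lgamma(0.5)
            - math.lgamma(n + m / 2.0)
            + sum(math.lgamma(c + 0.5) for c in counts))

def optimal_bins(data, max_m=50):
    """Number of bins that maximizes the posterior."""
    return max(range(1, max_m + 1), key=lambda m: log_posterior(data, m))

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(1000)]
print(optimal_bins(sample))
```

Too few bins leave the likelihood poorly matched to the data; too many bins are penalized through the prior-dependent gamma terms, which is the Occam's-razor balance the abstract describes.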
