Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two common ways to apply a CE algorithm to color images: processing the luminance channel only, or processing each color channel independently. However, both approaches can cause excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of the ratios between channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and outperforms existing methods in terms of objective evaluation metrics.
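The ratio-preservation idea admits a compact illustration. The sketch below is a minimal, generic version (not the authors' channel-adaptive algorithm): equalize a single intensity channel, then scale all three color channels by the same per-pixel ratio so that the ratios between channels, and hence the hue, are unchanged. The OpenCV calls and the mean-of-channels intensity proxy are assumptions for illustration.

```python
import cv2
import numpy as np

def hue_preserving_equalize(bgr):
    """Equalize intensity, then rescale B, G, R by one common per-pixel
    ratio so channel proportions (hue) are preserved. Simplified sketch."""
    img = bgr.astype(np.float64)
    intensity = img.mean(axis=2)                      # simple intensity proxy
    equalized = cv2.equalizeHist(intensity.astype(np.uint8)).astype(np.float64)
    ratio = (equalized + 1e-6) / (intensity + 1e-6)   # same scaling for all channels
    return np.clip(img * ratio[..., None], 0, 255).astype(np.uint8)
```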
Robust volcano plot: identification of differential metabolites in the presence of outliers.
Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro
2018-04-11
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel-weight-based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added shows that our proposed method produces only two non-overlapping differential metabolites, whereas the other nine methods produce between seven and 57. Our analyses show that the proposed differential metabolite identification technique performs better than existing methods and can thus contribute to the analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
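The abstract does not state the kernel weighting formula (the authors' implementation lives in the R package above), so the following Python sketch only illustrates the general shape of an outlier-robust volcano-plot statistic: down-weight points far from the group median with a hypothetical Gaussian kernel on a MAD-standardized scale, then form a Welch-type comparison from the weighted moments.

```python
import numpy as np
from scipy import stats

def kernel_weights(x):
    """Illustrative Gaussian kernel weights on a MAD-standardized scale."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med)) + 1e-12
    return np.exp(-0.5 * ((x - med) / (3.0 * mad)) ** 2)

def weighted_stats(x, w):
    m = np.average(x, weights=w)
    v = np.average((x - m) ** 2, weights=w)
    n_eff = w.sum() ** 2 / (w ** 2).sum()          # Kish effective sample size
    return m, v, n_eff

def robust_volcano_point(a, b, eps=1e-12):
    """Return (log2 fold change, -log10 p) for one metabolite."""
    ma, va, na = weighted_stats(a, kernel_weights(a))
    mb, vb, nb = weighted_stats(b, kernel_weights(b))
    t = (mb - ma) / np.sqrt(va / na + vb / nb + eps)
    df = max(min(na, nb) - 1.0, 1.0)
    p = 2.0 * stats.t.sf(abs(t), df)
    return np.log2((mb + eps) / (ma + eps)), -np.log10(p + eps)
```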
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Our objective was to perform a security analysis of the PsychoPass method and outline its limitations and possible improvements. We used brute-force and dictionary-attack analyses of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, it is still weak. We elaborate on this weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key distances apart. The improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. It requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
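The core of the strength argument is simple keyspace arithmetic, sketched below. The per-key symbol counts are illustrative assumptions: the original adjacency rule leaves only a handful of choices per step, which is why 24 typed characters can still be weak, while SHIFT/ALT-GR variants and longer key distances multiply the choices so fewer keys suffice.

```python
import math

def entropy_bits(choices_per_key, n_keys):
    """Entropy of a password built from n independent key choices."""
    return n_keys * math.log2(choices_per_key)

def crack_years(bits, guesses_per_sec=1e10):
    """Expected brute-force time at an assumed guessing rate."""
    return 2 ** (bits - 1) / guesses_per_sec / (3600 * 24 * 365)

# Original scheme: each key adjacent to the previous one (~8 choices/step),
# so 20 steps give far less entropy than 24 characters suggest.
print(crack_years(entropy_bits(8, 20)))    # on the order of years, not centuries
# Improved scheme: SHIFT/ALT-GR variants and 1-2 key distances
# (assume ~100 choices/step), so 10 keys already reach hundreds of years.
print(crack_years(entropy_bits(100, 10)))
```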
Environmental impact assessment for alternative-energy power plants in México.
González-Avila, María E; Beltrán-Morales, Luis Felipe; Braker, Elizabeth; Ortega-Rubio, Alfredo
2006-07-01
Ten Environmental Impact Assessment Reports (EIAR) were reviewed for projects involving alternative power plants developed in Mexico during the last twelve years. Our analysis focused on the methods used to assess the impacts produced by hydroelectric and geothermal power projects. The methods used to assess impacts in these EIARs ranged from simple descriptive criteria to quantitative models, and they are not concordant with the level of the EIAR required by the environmental authority, or even with the kind of project developed. We conclude that there is no correlation between the tools used to assess impacts and the assigned type of EIAR. Because the methods used to assess the impacts of these power projects have remained essentially unchanged over this period, we propose a quantitative method, based on ecological criteria and tools, to assess the impacts produced by hydroelectric and geothermal plants according to the specific characteristics of the project. The proposed method is supported by environmental norms and can assist environmental authorities in assigning the correct level and tools to be applied to hydroelectric and geothermal projects. It can also be adapted to other production activities in Mexico and to other countries.
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method produces full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of each frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which naturally keeps the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is used to measure the distance between histogram distributions; object heterogeneity is then calculated by combining the spectral and textural histogram distances with an adaptive weight. Third, an expectation-maximization algorithm is applied to determine the change category of each object, generating the initial change map. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
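For concreteness, a small sketch of the G-statistic distance between two object histograms (the likelihood-ratio analogue of the chi-squared test) and the adaptive-weight combination, written as a simple convex mix. The exact weighting used by the authors is not given in the abstract, so the form below is an assumption.

```python
import numpy as np

def g_statistic(h1, h2):
    """Likelihood-ratio (G-test) distance between two count histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    n1, n2 = h1.sum(), h2.sum()
    g = 0.0
    for o1, o2 in zip(h1, h2):
        tot = o1 + o2
        if tot == 0:
            continue
        e1, e2 = tot * n1 / (n1 + n2), tot * n2 / (n1 + n2)  # expected counts
        if o1 > 0:
            g += o1 * np.log(o1 / e1)
        if o2 > 0:
            g += o2 * np.log(o2 / e2)
    return 2.0 * g

def object_heterogeneity(g_spectral, g_textural, w):
    """Adaptive-weight combination of spectral and textural distances."""
    return w * g_spectral + (1.0 - w) * g_textural
```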
A New Method to Produce Ni-Cr Ferroalloy Used for Stainless Steel Production
NASA Astrophysics Data System (ADS)
Chen, Pei-Xian; Chu, Shao-Jun; Zhang, Guo-Hua
2016-08-01
A new electrosilicothermic method is proposed in the present paper to produce Ni-Cr ferroalloy, which can be used for the production of 300 series stainless steel. In this process, a Ni-Si ferroalloy is first produced as the intermediate alloy, and the Ni-Si ferroalloy melt is then desiliconized with chromium concentrate to generate Ni-Cr ferroalloy. The silicon content of the Ni-Si ferroalloy produced in the submerged arc furnace should be more than 15 mass% (for the purpose of phosphorus reduction), in order to ensure that the phosphorus content of the subsequently produced Ni-Cr ferroalloy is less than 0.03 mass%. A high utilization ratio of Si and a high recovery ratio of Cr can be obtained after the desiliconization reaction between the Ni-Si ferroalloy and chromium concentrate in the electric arc furnace (EAF)-shaking ladle (SL) process.
[An improved low spectral distortion PCA fusion method].
Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong
2013-10-01
Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortions of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory, and these masks are used to cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. Each pair of corresponding sub-region objects is fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method has the same ability to enhance spatial resolution as traditional PCA fusion and a greater ability to preserve spectral fidelity.
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object, and it is demonstrated with both simulation and experimental data. A numerical object is used to produce the simulation data, while the polyvinyl chloride based material and breast phantom samples are used to produce the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The images produced by the one-step linear reconstruction method show that the reconstruction closely matches the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
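In spirit, a one-step linear reconstruction solves a single regularized least-squares problem relating the change in boundary measurements to the change in optical properties through a sensitivity (Jacobian) matrix. The Tikhonov form below is a minimal sketch under that assumption; the abstract does not specify the exact operator or regularization.

```python
import numpy as np

def one_step_linear_recon(J, y_baseline, y_perturbed, alpha):
    """Recover the optical-property perturbation dx from the measurement
    difference dy = y_perturbed - y_baseline in one regularized solve:
        dx = (J^T J + alpha I)^{-1} J^T dy
    J     : (n_measurements, n_voxels) sensitivity matrix
    alpha : regularization coefficient (selected a priori, as above)
    """
    dy = y_perturbed - y_baseline
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dy)
```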
YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks
NASA Astrophysics Data System (ADS)
Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin
2018-06-01
In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector, where the recurrent YoTube explores the regression capability of RNNs for candidate bounding box prediction using learnt temporal dynamics, and the static YoTube produces bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed videos effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state of the art.
Optimal Management of Hydropower Systems
NASA Astrophysics Data System (ADS)
Bensalem, A.; Cherif, F.; Bennagoune, S.; Benbouza, M. S.; El-Maouhab, A.
In this study we propose a new model for solving the short-term management problem of water reservoirs with variable waterfall height. The water stored in these reservoirs is used to produce electrical energy. The proposed model is based on enhancing the value of water by taking into account its location in each reservoir and its waterfall height: water released from an upper reservoir to produce electrical energy is reused in the lower reservoirs to produce electrical energy as well, and the amount of water flow necessary to produce the same amount of electrical energy decreases as the height of the waterfall increases. The objective function is therefore expressed in terms of the potential energy of the water stored in all reservoirs. To analyze this model, we developed an algorithm based on the discrete maximum principle. The resulting equations are solved with an iterative method based on the gradient method, and the constraints are satisfied using the augmented Lagrangian method.
Methods to Control EMI Noises Produced in Power Converter Systems
NASA Astrophysics Data System (ADS)
Mutoh, Nobuyoshi; Ogata, Mitukatu
A new method to control the EMI noises produced in power converters (rectifier and inverter) composed of IPMs (Intelligent Power Modules) is studied, focusing especially on differential-mode noises. Differential-mode noises arise from the switching operations of the PWM control. Because they diffuse into the ground through stray capacitors distributed between the ground and the power transmission lines and machine frames, differential-mode noises should be confined and suppressed within the smallest possible area around the power converters. Conventional methods such as filtering techniques cannot easily control differential-mode noises once they have diffused. A new EMI noise control method using a multi-power circuit technique is therefore proposed. The effectiveness of the proposed method has been verified through simulations and experiments.
Viscoacoustic anisotropic full waveform inversion
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli
2017-01-01
A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and the anisotropy of media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations, with the attenuation terms solved in the wavenumber domain and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during the backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that our proposed viscoacoustic VTI FWI can produce accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test our method's sensitivity to velocity, Q, and the anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and the anisotropic parameters. As such, the proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.
Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.
2011-01-01
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
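The wrapper idea reduces to a supervised learning loop over voxels: on training images, learn how the host method's output differs from the manual labels; on new images, relabel voxels accordingly. The sketch below uses a random forest over simple per-voxel features as a stand-in; the feature set and classifier are assumptions, not the authors' implementation, and 3-D volumes are assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_features(intensity, host_seg):
    """Per-voxel features: intensity, host label, and voxel coordinates
    (stand-ins for the intensity, spatial and contextual patterns above)."""
    coords = np.indices(intensity.shape).reshape(intensity.ndim, -1).T
    return np.column_stack([intensity.ravel(), host_seg.ravel(), coords])

def train_wrapper(intensities, host_segs, manual_segs):
    """Learn to map (image, host segmentation) to the manual labels."""
    X = np.vstack([make_features(i, h) for i, h in zip(intensities, host_segs)])
    y = np.concatenate([m.ravel() for m in manual_segs])
    return RandomForestClassifier(n_estimators=50).fit(X, y)

def correct(model, intensity, host_seg):
    """Relabel every voxel of the host segmentation on a new image."""
    pred = model.predict(make_features(intensity, host_seg))
    return pred.reshape(host_seg.shape)
```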
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are algorithms designed to preserve visibility and the overall impression of brightness, contrast, and color when high dynamic range (HDR) images are shown on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, their results have not matched those of psychophysical experiments based on the human visual system. A color-rendering model is presented that combines tone-mapping and cone-response functions in the XYZ tristimulus color space. In the proposed method, the tone-mapping operator preserves visibility and the overall impression of brightness, contrast, and color when HDR images are mapped onto relatively low dynamic range devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the color distortions seen in conventional methods. The resulting image is then processed with a cone-response function that emphasizes human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm
2014-11-01
Experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The ... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the ... a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The ...
Zweigenbaum, P.; Bouaud, J.; Bachimont, B.; Charlet, J.; Boisvieux, J. F.
1997-01-01
The Menelas project aimed to produce a normalized conceptual representation from natural-language patient discharge summaries. Because of the complex and detailed nature of conceptual representations, evaluating the quality of the output of such a system is difficult. We present the method designed to measure the quality of Menelas output, and its application to the state of the French Menelas prototype at the end of the project. We examine this method in the framework recently proposed by Friedman and Hripcsak. We also propose two conditions which make it possible to reduce the evaluation preparation workload. PMID:9357694
Zlotnik, V.A.; McGuire, V.L.
1998-01-01
Using the developed theory and modified Springer-Gelhar (SG) model, an identification method is proposed for estimating hydraulic conductivity from multi-level slug tests. The computerized algorithm calculates hydraulic conductivity from both monotonic and oscillatory well responses obtained using a double-packer system. Field verification of the method was performed at a specially designed fully penetrating well of 0.1-m diameter with a 10-m screen in a sand and gravel alluvial aquifer (MSEA site, Shelton, Nebraska). During well installation, disturbed core samples were collected every 0.6 m using a split-spoon sampler. Vertical profiles of hydraulic conductivity were produced on the basis of grain-size analysis of the disturbed core samples. These results closely correlate with the vertical profile of horizontal hydraulic conductivity obtained by interpreting multi-level slug test responses using the modified SG model. The identification method was applied to interpret the responses from 474 slug tests at 156 locations at the MSEA site. More than 60% of the responses were oscillatory. The method produced a good match to experimental data for both oscillatory and monotonic responses using an automated curve-matching procedure. The proposed method allowed us to drastically increase the efficiency of each well used for aquifer characterization and to process massive arrays of field data. Recommendations generalizing this experience to massive application of the proposed method are developed.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. However, the model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimation. Simulation showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
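The distinction the abstract draws can be made concrete: rather than evaluating the fitted model at the mean covariate, average the fitted responses over every subject's covariates within each treatment assignment. A minimal statsmodels sketch of that reading follows; the paper's variance estimation is not shown, and the column layout is an assumption.

```python
import numpy as np
import statsmodels.api as sm

def population_group_means(result, X, trt_col, levels):
    """Counterfactual group means: set every subject's treatment to each
    level, predict on the response scale, and average over the observed
    covariate distribution (not the response at the mean covariate)."""
    means = {}
    for lv in levels:
        X_lv = np.array(X, dtype=float)
        X_lv[:, trt_col] = lv
        means[lv] = result.predict(X_lv).mean()
    return means

# Example with a logistic GLM, columns = [intercept, treatment, covariate]:
# result = sm.GLM(y, X, family=sm.families.Binomial()).fit()
# population_group_means(result, X, trt_col=1, levels=[0, 1])
```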
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
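The stability condition referred to above is the classical one: all roots of the AR characteristic polynomial lie strictly inside the unit circle. However the coefficients are obtained, the check itself is a few lines:

```python
import numpy as np

def ar_is_stable(phi):
    """Stability of AR(p): x_t = phi[0]*x_{t-1} + ... + phi[p-1]*x_{t-p} + w_t.
    Stable iff all roots of z^p - phi[0] z^{p-1} - ... - phi[p-1] satisfy |z| < 1."""
    poly = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))

print(ar_is_stable([0.5, 0.3]))   # True: both roots inside the unit circle
print(ar_is_stable([1.2, 0.1]))   # False: explosive model
```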
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with each other. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628
An effective hair detection algorithm for dermoscopic melanoma images of skin lesions
NASA Astrophysics Data System (ADS)
Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert
2016-09-01
Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts such as hairs, bubbles, black frames, and ruler marks create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive Canny edge-detection method followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. To ascertain its relative detection accuracy, it was compared to the Razmjooy hair-detection method [1] using segmentation error (SE), true detection rate (TDR) and false positioning rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR for the Razmjooy method [1]. Given the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method also shows an appreciable difference in the rate of detecting bubbles.
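A compact sketch of the pipeline as described (Canny edges, morphological filtering, arithmetic addition), with OpenCV. The fixed Canny thresholds and line-shaped structuring elements are assumptions; the paper uses an adaptive Canny variant.

```python
import cv2
import numpy as np

def detect_hairs(gray):
    """Hair mask via Canny edges + directional morphological closing,
    accumulated with an arithmetic (saturating) addition."""
    edges = cv2.Canny(gray, 50, 150)                  # paper: adaptive thresholds
    mask = np.zeros_like(edges)
    # Line-shaped kernels in four orientations link thin hair fragments.
    kernels = [np.ones((1, 9), np.uint8), np.ones((9, 1), np.uint8),
               np.eye(9, dtype=np.uint8), np.fliplr(np.eye(9, dtype=np.uint8))]
    for k in kernels:
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, k)
        mask = cv2.add(mask, closed)                  # arithmetic addition step
    return mask > 0
```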
Multiratio fusion change detection with adaptive thresholding
NASA Astrophysics Data System (ADS)
Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.
2017-04-01
A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
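The dual-ratio (DR) building block can be sketched directly: form both orderings of the pixel ratio, threshold each adaptively, and combine the detections. The mean-plus-k-standard-deviations rule below is an assumed stand-in for the adaptive thresholds; MR and MRF extend this with negative imagery and fusion-by-agreement.

```python
import numpy as np

def dual_ratio_change(img1, img2, k=2.0, eps=1e-6):
    """Dual-ratio change detection with adaptive thresholds (sketch)."""
    a = img1.astype(np.float64) + eps
    b = img2.astype(np.float64) + eps
    changed = np.zeros(a.shape, dtype=bool)
    for r in (a / b, b / a):                   # the two ratio images
        threshold = r.mean() + k * r.std()     # adaptive threshold per ratio
        changed |= r > threshold
    return changed
```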
NASA Astrophysics Data System (ADS)
Coimbra-Araújo, Carlos H.; Anjos, Rita C.
2017-01-01
A fraction of the magnetic luminosity (L_B) produced by Kerr black holes in some active galactic nuclei (AGNs) can provide the energy needed to accelerate ultra high energy cosmic rays (UHECRs) beyond the GZK limit, as observed, e.g., by the Pierre Auger experiment. Nevertheless, direct detection of those UHECRs carries little information about the direction of the source they are coming from, since charged particles are deflected by the intergalactic magnetic field. This problem raises the need for alternative methods to evaluate the luminosity of UHECRs (L_CR) from a given source. Methods proposed in the literature range from the observation of upper limits in gamma rays to the observation of upper limits in neutrinos produced by cascade effects during the propagation of UHECRs. In this context, the present work proposes a method to calculate limits on the possible conversion fractions η_CR = L_CR/L_B for nine UHECR AGN Seyfert sources based on the respective gamma-ray upper limits from Fermi-LAT data.
An approach for integrating the prioritization of functional and nonfunctional requirements.
Dabbagh, Mohammad; Lee, Sai Peck
2014-01-01
Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements which need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption while preserving the quality of its results at a high level of agreement with the results produced by the other two approaches.
NASA Astrophysics Data System (ADS)
Marius Andrei, Mihalache; Gheorghe, Nagit; Gavril, Musca; Vasile, Merticaru, Jr.; Marius Ionut, Ripanu
2016-11-01
In the present study the authors propose a new algorithm for identifying the loads that act upon a functional connecting rod during a full engine cycle. The loads are divided into three categories depending on the results they produce: static, semi-dynamic and dynamic. Because an engine cycle extends over 720°, the authors aim to identify a method for substituting values that produce the same effect as a value already seen at a previous crank angle. In other words, the proposed method aims to pinpoint the critical values that produce an effect different from any seen earlier in the engine cycle; only those values are then considered as valid loads acting upon the connecting rod in the FEA analyses. This technique was applied to each of the three categories mentioned above and produced different critical values for each of them. The whole study relies on a theoretical mechanical project developed to identify the values corresponding to each degree of the entire engine cycle of a Daewoo Tico automobile.
NASA Astrophysics Data System (ADS)
Anayah, F. M.; Kaluarachchi, J. J.
2014-06-01
Reliable estimation of evapotranspiration (ET) is important for water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because they are simple and practical, estimating regional ET from meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET under contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods, based on 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations on the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-based alternative. The proposed model is a single-step ET formulation with results equal to or better than those of recent studies using data-intensive, classical methods. The average root mean square error (RMSE), mean absolute bias (BIAS) and coefficient of determination (R²) across the 34 global sites were 20.57 mm month⁻¹, 10.55 mm month⁻¹ and 0.64, respectively. The proposed model is a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, widely used in national defense, aviation, aerospace and civilian areas. In some applications a FOG experiences environmental conditions such as vacuum, radiation and vibration, under which its scale-factor performance, an important accuracy indicator, is difficult to test with conventional methods because a turntable cannot operate under these conditions. Based on the fact that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with that produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the superimposed signal to the Y waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of a FOG with a superimposed externally generated sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as the input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, together with a correction method for the equivalent angular velocity obtained by analyzing the influence of each parameter error. A comparative experiment between the method proposed in this paper and turntable calibration was conducted, and the scale-factor performance results for the same FOG were consistent across the two methods. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal, so no turntable is needed to produce mechanical rotation, and the scale-factor performance of a FOG can be tested under ambient conditions in which a turntable cannot work.
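Whichever way the input rates are generated (turntable rotation or the equivalent sawtooth drive), evaluating scale-factor performance reduces to a linear fit of gyro output against input rate, with nonlinearity taken from the residuals. A generic sketch, with the ppm convention as an assumption:

```python
import numpy as np

def scale_factor_performance(omega_in, output):
    """Least-squares scale factor and nonlinearity (ppm of full scale)."""
    omega_in = np.asarray(omega_in, dtype=float)
    k, bias = np.polyfit(omega_in, output, 1)          # output ≈ k*omega + bias
    residual = np.asarray(output) - (k * omega_in + bias)
    full_scale = abs(k) * np.max(np.abs(omega_in))
    return k, 1e6 * np.max(np.abs(residual)) / full_scale
```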
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using polynomial functions which produce local models of the displacement field obtained with DENSE. Given a specific polynomial order, each model is obtained as the least squares fit of the acquired displacement field, and these local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessing the full strain tensor and resolving transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
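The strain computation described above can be sketched in a few lines: fit a polynomial to the DENSE displacements in a local neighborhood, read the displacement gradient off the fitted coefficients, and form the Green-Lagrange strain tensor. A first-order (affine) 3-D fit is shown; the paper's polynomial order and neighborhood selection are not reproduced here.

```python
import numpy as np

def local_strain(X, U):
    """Green-Lagrange strain from an affine least-squares displacement model.
    X : (n, 3) material positions in a local neighborhood
    U : (n, 3) DENSE displacement vectors at those positions
    """
    A = np.column_stack([X, np.ones(len(X))])       # affine design matrix
    coef, *_ = np.linalg.lstsq(A, U, rcond=None)    # U ≈ X @ G + c
    grad_u = coef[:3].T                             # displacement gradient dU/dX
    F = np.eye(3) + grad_u                          # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))              # Green-Lagrange strain E
```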
Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation
NASA Technical Reports Server (NTRS)
Hong, Seunghun
2002-01-01
As proposed in our original proposal, we developed an innovative new method to assemble millions of single wall carbon nanotube (SWCNT)-based circuit components as fast as conventional microfabrication processes. This method is based on a surface-template assembly strategy. It removes one of the major bottlenecks in carbon nanotube based electrical applications and, potentially, may allow the mass production of large numbers of SWCNT-based integrated devices of critical interest to NASA.
An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.
Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng
2018-02-11
Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed here, through two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce the spectral distortion of fused dark pixels and sharpen boundaries between image objects, while obtaining quality indexes similar to those of the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands, and was shown to be more robust to such misalignments than the other methods.
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is complicated by the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method with different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.
Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q
2010-10-01
The fluidic lens camera system presents unique image processing challenges due to its novel fluid optics. Developed for surgical applications, the fluid lens offers advantages over traditional glass optics, such as zooming with no moving parts and better miniaturization. Despite these abilities, the nonuniform reaction of the liquid lens to different color wavelengths creates sharp color planes alongside blurred ones, causing severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from the sharp color planes to improve the blurred ones. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, a previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts. The proposed contourlet-based system uses directional filtering to adapt to the contours of the image, and produces an image with a level of sharpness similar to the wavelet-based method but with fewer ghosting artifacts. Conditions under which the algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green color plane, the methods can be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work; the information-sharing algorithm benefits any image set with high edge correlation. The proposed algorithm can produce improved results in deblurring, noise reduction, and resolution enhancement.
Automatic comic page image understanding based on edge segment analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai
2013-12-01
Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify their reading order. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
Remote sensing imagery classification using multi-objective gravitational search algorithm
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2016-10-01
Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first implemented to extract texture features of the RSI, which are then combined with the spectral features to construct the spatial-spectral feature set. Clustering of the spectral-spatial feature set is then carried out: cluster centers are randomly generated initially, and are updated and optimized adaptively by the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, so that a number of classification results are produced and users can pick the most promising one according to their problem requirements. To validate the effectiveness of the proposed method quantitatively and qualitatively, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results were compared with those produced by two single-cluster-validity-index-based methods and two state-of-the-art multi-objective optimization based classification methods. The comparison shows that the proposed method achieves more accurate RSI classification.
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lose image details, or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method is proposed in this paper. The proposed method can generate high-quality fused images using a weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of the pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. Extensive experiments on multimodal medical images show that, compared with numerous state-of-the-art MIF methods, the proposed method preserves image details very well and effectively avoids the introduction of artifacts, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric, and standard deviation. PMID:27649190
Proposed hybrid-classifier ensemble algorithm to map snow cover area
NASA Astrophysics Data System (ADS)
Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir
2018-01-01
A metaclassification ensemble approach is known to improve the prediction performance for snow-covered area. The methodology adopted in this case is based on a neural network metaclassifier built over four state-of-the-art machine learning algorithms, support vector machines, artificial neural networks, spectral angle mapper, and K-means clustering, together with a snow index, the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have rarely been used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya, using a multispectral Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and with the accuracies of the individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity), and receiver operating characteristic curves. A total of 225 parameter combinations for the individual classifiers were trained and tested on the dataset, and the results were compared with the proposed approach. The proposed methodology produced the highest classification accuracy (95.21%), close to the 94.01% produced by the proposed AdaBoost ensemble algorithm. From these observations, it was concluded that the ensembles of classifiers produced better results than the individual classifiers.
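As an illustration of this kind of hybrid ensemble, here is a hedged sketch using common scikit-learn classifiers on synthetic features; the features, labels, and classifier settings are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))             # e.g., band reflectances + snow index
y = (X[:, 0] + X[:, 5] > 0).astype(int)   # 1 = snow, 0 = no snow (toy labels)

vote = VotingClassifier(estimators=[
    ("svm", SVC()),
    ("ann", MLPClassifier(max_iter=500)),
    ("ada", AdaBoostClassifier()),        # decision-tree based boosting
], voting="hard")                         # majority-rule combination
vote.fit(X[:200], y[:200])
print(vote.score(X[200:], y[200:]))       # held-out accuracy
```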
Reusable EGaIn-Injected Substrate-Integrated-Waveguide Resonator for Wireless Sensor Applications
Memon, Muhammad Usman; Lim, Sungjoon
2015-01-01
The proposed structure in this research is constructed with substrate integrated waveguide (SIW) technology and has a mechanism that produces 16 different and distinct resonant frequencies between 2.45 and 3.05 GHz by perturbing a fundamental TE10 mode. A unique method for producing multiple resonances in a radio-frequency planar structure without any extra circuitry or passive elements is thus developed. The proposed SIW structure has four vertical fluidic holes (channels); injecting eutectic gallium indium (EGaIn), commonly known as liquid metal (LM), into these vertical channels produces different resonant frequencies. Each channel is either empty or filled with LM, so the four vertical channels yield 2^4 = 16 distinct frequency combinations in total. PMID:26569257
NASA Astrophysics Data System (ADS)
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to get the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture the internal structures of a human body. However, bone segmentation of US images is still challenging because it is strongly influenced by speckle noise and poor image quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
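A minimal sketch of the gap-filling step, assuming the contour is stored as one boundary row per image column; the coordinates below are illustrative, not from the paper.

```python
import numpy as np

cols = np.array([10, 11, 12, 15, 16, 17, 18])   # columns with detected pixels
rows = np.array([40, 41, 43, 47, 48, 50, 51])   # their boundary row positions
a, b, c = np.polyfit(cols, rows, deg=2)          # quadratic fit y = a*x^2 + b*x + c

missing = np.array([13, 14])                     # columns where detection failed
filled_rows = np.rint(a * missing**2 + b * missing + c).astype(int)
print(dict(zip(missing.tolist(), filled_rows.tolist())))
```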
Robust rotational-velocity-Verlet integration methods.
Rozmanov, Dmitri; Kusalik, Peter G
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in the velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions but is not quaternion specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.
Arora, Naveen Kumar; Verma, Maya
2017-12-01
In this study, siderophore production by various bacteria among plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (the chrome azurol sulphonate assay) as well as a 96-well microplate method. Siderophore production was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, confirming the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals, making the assay far less tedious, and costing less than the method currently in use. The modified microtiter plate method as proposed here makes it far easier to screen plant-associated bacteria for this plant-growth-promoting character.
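Both assays report percent siderophore units computed from the CAS-assay absorbances of the reference and the sample; a minimal sketch of that arithmetic, with illustrative absorbance values.

```python
def percent_siderophore_unit(a_reference, a_sample):
    """PSU = (Ar - As) / Ar * 100, with Ar the reference and As the sample absorbance."""
    return (a_reference - a_sample) / a_reference * 100.0

print(percent_siderophore_unit(a_reference=0.92, a_sample=0.41))  # ~55.4 PSU
```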
NASA Astrophysics Data System (ADS)
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
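A minimal sketch of the expected improvement criterion used to select additional samples; in the method, mu and sigma would come from the hybrid surrogate (kriging local deviation plus RBF global model), whereas here they are plain numbers.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(mu=0.8, sigma=0.3, y_best=1.0))
```

EI balances exploitation (low predicted mean mu) against exploration (high predictive uncertainty sigma), which is why it is a natural infill criterion for surrogate-based optimization.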
NASA Astrophysics Data System (ADS)
Zhang, Yi; Wu, Yulong; Yan, Jianguo; Wang, Haoran; Rodriguez, J. Alexis P.; Qiu, Yue
2018-04-01
In this paper, we propose an inverse method for full gravity gradient tensor data in the spherical coordinate system. As opposed to traditional gravity inversion in the Cartesian coordinate system, our proposed method takes the curvature of the Earth, the Moon, or other planets into account, using tesseroid bodies to produce gravity gradient effects in forward modeling. We used both synthetic and observed datasets to test the stability and validity of the proposed method. Our results using synthetic gravity data show that the new method predicts the depth of the density-anomalous body efficiently and accurately. Using observed gravity data for the Mare Smythii area on the Moon, the inverted density distribution of the crust reveals the geological structure of this area. These results validate the proposed method and its potential application to large-area data inversion of planetary geological structures.
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.
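A hedged sketch of the general idea, robustly combining a stack of overlapping DEMs by iteratively down-weighting observations with poor goodness of fit; the Cauchy-style weight below is an assumption, not the paper's stepwise-regression weights.

```python
import numpy as np

def robust_mosaic(dems, n_iter=5, scale=1.0):
    """dems: (k, ny, nx) stack of overlapping DEMs; returns a robust weighted mean."""
    est = np.median(dems, axis=0)               # robust initial estimate
    for _ in range(n_iter):
        r = dems - est                          # residuals of each observation
        w = 1.0 / (1.0 + (r / scale) ** 2)      # down-weight poorly fitting values
        est = (w * dems).sum(axis=0) / w.sum(axis=0)
    return est

stack = 100.0 + np.random.randn(4, 8, 8)        # four noisy 8x8 DEM tiles
stack[0] += 5.0                                 # one biased observation
print(robust_mosaic(stack).mean())              # close to 100, bias suppressed
```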
Sohaib, Ali; Farooq, Abdul R; Atkinson, Gary A; Smith, Lyndon N; Smith, Melvyn L; Warr, Robert
2013-03-01
This paper proposes and describes an implementation of a photometric stereo-based technique for in vivo assessment of three-dimensional (3D) skin topography in the presence of interreflections. The proposed method illuminates skin with red, green, and blue colored lights and uses the resulting variation in surface gradients to mitigate the effects of interreflections. Experiments were carried out on Caucasian, Asian, and African American subjects to demonstrate the accuracy of our method and to validate the measurements produced by our system. Our method produced significant improvement in 3D surface reconstruction for all Caucasian, Asian, and African American skin types. The results also illustrate the differences in recovered skin topography due to the nondiffuse bidirectional reflectance distribution function (BRDF) for each color illumination used, which also concur with the existing multispectral BRDF data available for skin.
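For background, classic three-source photometric stereo recovers the surface normal per pixel by solving a 3x3 linear system; a minimal sketch with illustrative light directions and intensities (the paper's colored-light, interreflection-aware variant builds on this idea).

```python
import numpy as np

L = np.array([[0.00, 0.50, 0.87],     # red light direction (unit vector)
              [0.43, -0.25, 0.87],    # green light direction
              [-0.43, -0.25, 0.87]])  # blue light direction

I = np.array([0.61, 0.55, 0.48])      # observed intensities at one pixel
g = np.linalg.solve(L, I)             # Lambertian model: I = L @ (albedo * n)
albedo = np.linalg.norm(g)
normal = g / albedo                   # unit surface normal
print(albedo, normal)
```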
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only the valid information found in the input image as the clue for restoring the blurred regions. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method uses not only the valid information of the input image itself but also the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to optimize, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments are performed. The experimental results verify that our method can effectively reduce the execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.
Construction of prediction intervals for Palmer Drought Severity Index using bootstrap
NASA Astrophysics Data System (ADS)
Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan
2018-04-01
In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. The performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results reveal that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
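A hedged sketch of a residual-based bootstrap prediction interval, using an AR(1) fit as a stand-in for the authors' model: resample centered residuals, regenerate one-step forecasts, and take empirical quantiles.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=241).cumsum() * 0.1             # toy monthly PDSI-like series

# Fit AR(1): x_t = a + b * x_{t-1} + e_t by least squares.
Xmat = np.column_stack([np.ones(len(x) - 1), x[:-1]])
a, b = np.linalg.lstsq(Xmat, x[1:], rcond=None)[0]
res = x[1:] - (a + b * x[:-1])
res -= res.mean()                                   # center the residuals

boot = a + b * x[-1] + rng.choice(res, size=2000)   # bootstrap one-step forecasts
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"95% prediction interval: [{lo:.2f}, {hi:.2f}]")
```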
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for quantitative prediction. We then consider several feature combination strategies (direct combination and average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
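A minimal sketch of the scoring and ensemble ideas; the weights, profile, and the three per-feature predictions are illustrative stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
profile = rng.integers(0, 2, size=50)     # presence/absence of 50 side effects
weights = rng.random(50)                  # empirical severity weights
score = float(profile @ weights)          # quantitative side effect score

# Average-scoring ensemble over three feature-specific predictors
# (substructures, targets, indications); values are hypothetical.
pred_substructures, pred_targets, pred_indications = 11.2, 12.8, 12.1
ensemble_score = np.mean([pred_substructures, pred_targets, pred_indications])
print(score, ensemble_score)
```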
Li, Jing; Zhang, Miao; Chen, Lin; Cai, Congbo; Sun, Huijun; Cai, Shuhui
2015-06-01
We employ an amplitude-modulated chirp pulse to selectively excite spins in one or more regions of interest (ROIs) to realize reduced field-of-view (rFOV) imaging based on a single-shot spatiotemporally encoded (SPEN) sequence and Fourier transform reconstruction. The proposed rFOV imaging method was theoretically analyzed, illustrated with numerical simulations, and tested with phantom experiments and in vivo rat experiments. In addition, the point spread function was applied to demonstrate the feasibility of the proposed method. To evaluate the proposed method, the rFOV results were compared with those obtained using the EPI method with orthogonal RF excitation. The simulation and experimental results show that the proposed method can image one or two separated ROIs along the SPEN dimension in a single shot with higher spatial resolution, less sensitivity to field inhomogeneity, and practically no aliasing artifacts. In addition, the proposed method may produce rFOV images with a signal-to-noise ratio comparable to that of rFOV EPI images. The proposed method is promising for applications under severe susceptibility heterogeneities and for imaging separate ROIs simultaneously. Copyright © 2015 Elsevier Inc. All rights reserved.
Many-body physics using cold atoms
NASA Astrophysics Data System (ADS)
Sundar, Bhuvanesh
Advances in experiments on dilute ultracold atomic gases have given us access to highly tunable quantum systems. In particular, there have been substantial improvements in achieving different kinds of interaction between atoms. As a result, ultracold atomic gases offer an ideal platform to simulate many-body phenomena in condensed matter physics and to engineer other novel phenomena that result from the exotic interactions produced between atoms. In this dissertation, I present a series of studies that explore the physics of dilute ultracold atomic gases in different settings. In each setting, I explore a different form of the inter-particle interaction. Motivated by experiments which induce artificial spin-orbit coupling for cold fermions, I explore this system in my first project. In this project, I propose a method to perform universal quantum computation using the excitations of interacting spin-orbit coupled fermions, in which effective p-wave interactions lead to the formation of a topological superfluid. Motivated by experiments which explore the physics of exotic interactions between atoms trapped inside optical cavities, I explore this system in a second project. I calculate the phase diagram of lattice bosons trapped in an optical cavity, where the cavity modes mediate effective global-range checkerboard interactions between the atoms. I compare this phase diagram with one that was recently measured experimentally. In two other projects, I explore quantum simulation of condensed matter phenomena due to spin-dependent interactions between particles. I propose a method to produce tunable spin-dependent interactions between atoms using an optical Feshbach resonance. In one project, I use these spin-dependent interactions in an ultracold Bose-Fermi system and propose a method to produce the Kondo model, along with an experiment to directly observe the Kondo effect in this system. In another project, I propose using lattice bosons with a large hyperfine spin, which have Feshbach-induced spin-dependent interactions, to produce a quantum dimer model, and I propose an experiment to detect the ground state of this system. In a final project, I develop tools to simulate the dynamics of fermionic superfluids in which fermions interact via a short-range interaction.
Le, Nguyen-Quoc-Khanh; Ou, Yu-Yen
2016-07-30
Cellular respiration is a catabolic pathway for producing adenosine triphosphate (ATP) and is the most efficient process through which cells harvest energy from consumed food. When cells undergo cellular respiration, they require a pathway to keep and transfer electrons (i.e., the electron transport chain). Through oxidation-reduction reactions, the electron transport chain produces a transmembrane proton electrochemical gradient. When protons flow back through the membrane, this mechanical energy is converted into chemical energy by ATP synthase; this conversion produces ATP, which provides energy for many cellular processes. In the electron transport chain, flavin adenine dinucleotide (FAD) is one of the most vital molecules for carrying and transferring electrons. Therefore, predicting FAD binding sites in the electron transport chain is vital for helping biologists understand the electron transport chain process and energy production in cells. We used an independent data set to evaluate the performance of the proposed method, which had an accuracy of 69.84%. We compared the performance of the proposed method in analyzing two newly discovered electron transport protein sequences with that of the general FAD binding predictor presented by Mishra and Raghava and determined that the accuracy of the proposed method improved by 9-45% and its Matthews correlation coefficient was 0.14-0.5. Furthermore, the proposed method significantly reduced the number of false positives and can provide useful information for biologists. We developed a method based on PSSM profiles and SAAPs for identifying FAD binding sites in newly discovered electron transport protein sequences. This approach achieved a significant improvement after we added SAAPs to the PSSM features to analyze FAD binding proteins in the electron transport chain. The proposed method can serve as an effective tool for predicting FAD binding sites in electron transport proteins and can help biologists understand the functions of the electron transport chain, particularly those of FAD binding sites. We also developed a web server that identifies FAD binding sites in electron transporters, available for academics.
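A hedged sketch of the windowed PSSM feature extraction that predictors of this kind build on: for each residue, the PSSM rows in a window around it are concatenated into the feature vector for binding-site classification. The window size of 13 is an assumption.

```python
import numpy as np

def pssm_window_features(pssm, window=13):
    """pssm: (sequence_length, 20) profile; returns (L, window*20) features."""
    half = window // 2
    padded = np.pad(pssm, ((half, half), (0, 0)))     # zero-pad the sequence ends
    return np.stack([padded[i:i + window].ravel() for i in range(len(pssm))])

pssm = np.random.randn(120, 20)                       # toy 120-residue profile
print(pssm_window_features(pssm).shape)               # (120, 260)
```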
Figure-ground segmentation based on class-independent shape priors
NASA Astrophysics Data System (ADS)
Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu
2018-01-01
We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
CuBe: parametric modeling of 3D foveal shape using cubic Bézier
Yadav, Sunil Kumar; Motamedi, Seyedamirhosein; Oberwahrenbrock, Timm; Oertel, Frederike Cosima; Polthier, Konrad; Paul, Friedemann; Kadas, Ella Maria; Brandt, Alexander U.
2017-01-01
Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina and is commonly used for assessing pathological changes of the fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the fovea shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial along with a least squares optimization to produce a best-fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric modeling. Our quantitative and visual results show that the proposed model is not only able to reconstruct important features of the foveal shape, but also produces smaller errors than state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters differ significantly between the two groups. PMID:28966857
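Because the Bernstein basis is linear in the control points once the curve parameter values t are fixed, the cubic Bézier fit reduces to linear least squares; a minimal sketch with illustrative control points and noise.

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameters t, shape (n, 4)."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

t = np.linspace(0, 1, 25)
true_ctrl = np.array([[0, 0], [0.3, -0.4], [0.7, -0.4], [1, 0]])  # pit-like dip
pts = bernstein_matrix(t) @ true_ctrl + 0.01 * np.random.randn(25, 2)

ctrl_fit, *_ = np.linalg.lstsq(bernstein_matrix(t), pts, rcond=None)
print(ctrl_fit)   # recovered control points approximate true_ctrl
```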
A tuned mesh-generation strategy for image representation based on data-dependent triangulation.
Li, Ping; Adams, Michael D
2013-05-01
A mesh-generation framework for image representation based on data-dependent triangulation is proposed. The proposed framework is a modified version of the frameworks of Rippa and Garland and Heckbert that facilitates the development of more effective mesh-generation methods. As the proposed framework has several free parameters, the effects of different choices of these parameters on mesh quality are studied, leading to the recommendation of a particular set of choices for these parameters. A mesh-generation method is then introduced that employs the proposed framework with these best parameter choices. This method is demonstrated to produce meshes of higher quality (both in terms of squared error and subjectively) than those generated by several competing approaches, at a relatively modest computational and memory cost.
Real-time PCR assays for detection and quantification of aflatoxin-producing molds in foods.
Rodríguez, Alicia; Rodríguez, Mar; Luque, M Isabel; Martín, Alberto; Córdoba, Juan J
2012-08-01
Aflatoxins are among the most toxic mycotoxins. Early detection and quantification of aflatoxin-producing species is crucial to improve food safety. In the present work, two protocols of real-time PCR (qPCR) based on SYBR Green and TaqMan were developed, and their sensitivity and specificity were evaluated. Primers and probes were designed from the o-methyltransferase gene (omt-1) involved in aflatoxin biosynthesis. Fifty-three mold strains representing aflatoxin producers and non-producers of different species, usually reported in food products, were used as references. All strains were tested for aflatoxins production by high-performance liquid chromatography-mass spectrometry (HPLC-MS). The functionality of the proposed qPCR method was demonstrated by the strong linear relationship of the standard curves constructed with the omt-1 gene copy number and Ct values for the different aflatoxin producers tested. The ability of the qPCR protocols to quantify aflatoxin-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 4 to 1 log cfu/g per reaction for all qPCR assays in the different food matrices (peanuts, spices and dry-fermented sausages). The detection limit in all inoculated foods ranged from 1 to 2 log cfu/g for SYBR Green and TaqMan assays. No significant effect was observed due to the different equipment, operator, and qPCR methodology used in the tests of repeatability and reproducibility for different foods. The proposed methods quantified with high efficiency the fungal load in foods. These qPCR protocols are proposed for use to quantify aflatoxin-producing molds in food products. Copyright © 2012 Elsevier Ltd. All rights reserved.
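A hedged sketch of the standard-curve arithmetic implied above: regress Ct on log10 gene copy number and derive the amplification efficiency E = 10^(-1/slope) - 1. The values are illustrative, not the paper's data.

```python
import numpy as np

log_copies = np.array([3, 4, 5, 6, 7], dtype=float)   # log10 omt-1 copy number
ct = np.array([31.2, 27.9, 24.5, 21.1, 17.8])         # measured Ct values

slope, intercept = np.polyfit(log_copies, ct, deg=1)
efficiency = 10 ** (-1.0 / slope) - 1.0               # ~1.0 means 100% efficiency
r2 = np.corrcoef(log_copies, ct)[0, 1] ** 2
print(f"slope={slope:.2f}, E={efficiency:.1%}, R^2={r2:.3f}")
```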
NASA Astrophysics Data System (ADS)
Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian
2017-11-01
A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly carcinogenic, and they can be detected by fluorescence spectroscopy. However, the instrument produces noise during measurement, and weak fluorescence signals are easily corrupted by it, so we propose a way to denoise the spectra and improve detection. First, we use a fluorescence spectrometer to measure the PAHs and obtain fluorescence spectra. Subsequently, the noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
Event by event analysis and entropy of multiparticle systems
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.
2000-04-01
The coincidence method of measuring the entropy of a system, proposed some time ago by Ma, is generalized to include systems out of equilibrium. It is suggested that the method can be adapted to analyze multiparticle states produced in high-energy collisions.
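In its simplest equilibrium form, Ma's method estimates entropy from the coincidence rate among sampled states, since two independent samples of a system with Omega equally occupied states coincide with probability about 1/Omega, giving S ≈ ln(Omega). A minimal sketch on a toy system:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
states = rng.integers(0, 50, size=400)    # 400 sampled states of a toy system

pairs = sum(1 for a, b in combinations(states, 2) if a == b)
total = len(states) * (len(states) - 1) // 2
entropy_estimate = np.log(total / pairs)  # S ~ ln(1 / coincidence rate)
print(entropy_estimate, np.log(50))       # compare with the exact ln(Omega)
```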
Micro-Employees Employment, Enhanced Oil-Recovery Improvement
NASA Astrophysics Data System (ADS)
Allahtavakoli, M.; Allahtavakoli, Y.
2009-04-01
Employment of micro-organisms, as profitable micro-employees, in the improvement of enhanced oil recovery (EOR) leads us to a well-known method named "MEOR". Applying micro-organisms in MEOR makes it more lucrative than other EOR approaches because feeding these micro-employees is highly economical: their metabolic processes require cheap food resources such as molasses. In addition, utilizing the local micro-organisms in reservoirs reduces costs effectively; furthermore, these micro-organisms are safe and innocuous to some extent. In MEOR, the micro-organisms are employed for two purposes, restoring pressure to the reservoir and decreasing oil viscosity. More often than not, the former is achieved by the in-situ mechanism or by applying micro-organisms that produce biopolymers, and the latter by applying micro-organisms that produce bio-surfactants. This paper, as a proposal propounded to the National Iranian Oil Company (NIOC), is an argument for studying and reviewing the interaction between micro-organisms and reservoir physiochemical properties, biopolymer producers and bio-surfactant producers, the in-situ mechanism, proposed methods in MEOR, and their limitations.
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
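A hedged sketch of estimating one confidence limit from only a few tests of significance, bisecting on the hypothesized mean until the p-value crosses alpha; a plain one-sample t-test stands in here for the adaptive test of the paper.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=40) + 5.0   # long-tailed sample

def upper_limit(x, alpha=0.05, n_steps=30):
    """Find the largest mean not rejected at level alpha by bisection."""
    lo, hi = x.mean(), x.mean() + 10 * x.std()
    for _ in range(n_steps):              # each step is one test of significance
        mid = 0.5 * (lo + hi)
        p = ttest_1samp(x, popmean=mid).pvalue
        lo, hi = (mid, hi) if p > alpha else (lo, mid)
    return 0.5 * (lo + hi)

print(upper_limit(x))   # upper 95% confidence limit for the mean
```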
Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.
Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo
2015-01-01
Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecasting also on unseen data.
NASA Astrophysics Data System (ADS)
Yoneda, Makoto; Dohmeki, Hideo
A position control system with the advantages of large torque, low vibration, and high resolution can be obtained by constant-current micro-step drive applied to a hybrid stepping motor. However, the losses are large, because the current is controlled uniformly regardless of load torque. Sensorless control, as used for permanent magnet motors, is one effective technique for realizing a highly efficient position control system; however, the control methods proposed so far are aimed at speed control. This paper therefore proposes switching between the micro-step drive and the sensorless drive. The switching of the drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change is produced at the moment of switching when the electrical angle is set and the integrator is reset to zero. Under load, however, a large speed change was observed. The proposed system can switch drive methods without producing a speed change by initializing the integrator with the estimated value. With this technique, a low-loss position control system that exploits the advantages of the hybrid stepping motor has been built.
Oil core microcapsules by inverse gelation technique.
Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis
2015-01-01
A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying a calcium chloride solution in oil and then adding it dropwise to an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimization of the method leads to microcapsules with diameters around 500 μm. In the search for microcapsules with improved diffusion characteristics, size reduction is an essential factor in broadening applications in the food, cosmetics, and pharmaceutical areas. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described by the literature, one common issue is the interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented on the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viani, Alberto, E-mail: viani@itam.cas.cz; Sotiriadis, Konstantinos; Len, Adél
Full characterization of fired-clay bricks is crucial for the improvement of process variables in manufacturing and, in the case of old bricks, for restoration/replacement purposes. To this aim, five bricks produced in a plant in the Czech Republic in the past have been investigated with a combination of analytical techniques in order to derive information on the firing process. An additional old brick from another brickyard was also used to study the influence of different raw materials on sample microstructure. The potential of X-ray diffraction with the Rietveld method and the small angle neutron scattering technique has been exploited to describe the phase transformations taking place during firing and to characterize the brick microstructure. The unit-cell parameter of spinel and the amount of hematite are proposed as indicators of the maximum firing temperature, although the latter is limited to bricks produced from the same raw material. The fractal quality of the surface area of pores obtained from small angle neutron scattering is also suggested as a method to distinguish between bricks produced from different raw clays. - Highlights: • Rietveld method helps in describing microstructure and physical properties of bricks. • XRPD derived cell parameter of spinel is proposed as an indicator of firing temperature. • SANS effectively describes brick micro and nanostructure, including closed porosity. • Fractal quality of pore surface is proposed as 'fingerprint' of brick manufacturing.
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing existing standard generalized estimating equations algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
76 FR 39860 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
... ``Broad Program Area Categories'' (BPACs) for purposes of conducting the research. For each evaluation... data collection methods than those prescribed for high-rigor. For example, data may be collected by... methods to produce energy savings and outcome estimates. A range of qualitative, quantitative (survey), on...
Kara, Derya; Fisher, Andrew; Hill, Steve
2015-12-01
The aim of this study is to develop a new method for the extraction and preconcentration of trace elements from edible oils via an ultrasound-assisted extraction using ethylenediaminetetraacetic acid (EDTA) producing detergentless microemulsions. These were then analyzed using ICP-MS against matrix matched standards. Optimum experimental conditions were determined and the applicability of the proposed ultrasound-assisted extraction method was investigated. Under the optimal conditions, the detection limits (μg kg(-1)) were 2.47, 2.81, 0.013, 0.037, 1.37, 0.050, 0.049, 0.47, 0.032 and 0.087 for Al, Ca, Cd, Cu, Mg, Mn, Ni, Ti, V and Zn respectively for edible oils (3Sb/m). The accuracy of the developed method was checked by analyzing certified reference material. The proposed method was applied to different edible oils such as sunflower seed oil, rapeseed oil, olive oil and cod liver oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
Xia, Meng-lei; Wang, Lan; Yang, Zhi-xia; Chen, Hong-zhang
2016-04-01
This work proposes a new method that applies image processing and a support vector machine (SVM) for the screening of mold strains. Taking Monascus as an example, the morphological characteristics of Monascus colonies were quantified by image processing, and the association between these characteristics and pigment production capability was determined by the SVM. On this basis, a highly automated screening strategy was achieved. The accuracy of the proposed strategy is 80.6%, which is comparable with the existing methods (81.1% for microplate and 85.4% for flask). Meanwhile, the screening of 500 colonies takes only 20-30 min, which is the highest rate among all published results. By applying this automated method, 13 strains with high predicted production were obtained, and the best one produced 2.8-fold the pigment (226 U/mL) and 1.9-fold the lovastatin (51 mg/L) of the parent strain. The current study provides an effective and promising method for strain improvement.
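A hedged sketch of the screening idea, classifying colonies from image-derived morphological features with an SVM; the two features and the labels are illustrative stand-ins for the paper's morphological descriptors.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# Toy features per colony: [area (px), mean red-channel intensity].
X = np.vstack([rng.normal([900, 0.3], [80, 0.05], (40, 2)),    # low producers
               rng.normal([700, 0.7], [80, 0.05], (40, 2))])   # high producers
y = np.repeat([0, 1], 40)

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])      # train on half
print(clf.score(X[1::2], y[1::2]))                              # held-out accuracy
```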
Amin, A S
2001-03-01
A fairly sensitive, simple and rapid spectrophotometric method for the determination of some beta-lactam antibiotics, namely ampicillin (Amp), amoxycillin (Amox), 6-aminopenicillanic acid (6APA), cloxacillin (Clox), dicloxacillin (Diclox) and flucloxacillin sodium (Fluclox), in bulk samples and in pharmaceutical dosage forms is described. The proposed method involves the use of pyrocatechol violet as a chromogenic reagent. These drugs produce a reddish brown coloured ion pair with absorption maxima at 604, 641, 645, 604, 649 and 641 nm for Amp, Amox, 6APA, Clox, Diclox and Fluclox, respectively. The colours produced obey Beer's law and are suitable for the quantitative determination of the named compounds. The optimization of different experimental conditions is described. The molar ratio of the ion pairs was established and a proposal for the reaction pathway is given. The procedure described was applied successfully to determine the examined drugs in dosage forms, and the results obtained were comparable to those obtained with the official methods.
A New Shape Description Method Using Angular Radial Transform
NASA Astrophysics Data System (ADS)
Lee, Jong-Min; Kim, Whoi-Yul
Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method that consists of a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitude and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phase so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average BEP of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.
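The rotation invariance rests on the fact that rotating a shape by theta multiplies the ART coefficient F(n, m) by exp(-j m theta), so subtracting m times a reference phase aligns the phases. A minimal sketch with toy coefficients (the choice of F(0, 1) as the reference is an assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.normal(size=(3, 8)) + 1j * rng.normal(size=(3, 8))  # toy ART coeffs F[n, m]
m = np.arange(8)

theta = 0.7                                  # simulate rotating the shape
F_rot = F * np.exp(-1j * m * theta)          # rotation only shifts the phases

def align(F):
    """Cancel the rotation-dependent phase using F(0, 1) as the reference."""
    ref = np.angle(F[0, 1])
    return F * np.exp(-1j * m * ref)

# Magnitudes and aligned phases are unchanged by the rotation.
print(np.allclose(align(F), align(F_rot)))
```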
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
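A minimal sketch of the balance heuristic at the heart of multiple importance sampling, combining a uniform (stratified) hemisphere strategy with a cosine-weighted importance strategy; the single sampled direction is illustrative.

```python
import numpy as np

def pdf_uniform(_cos_theta):
    return 1.0 / (2.0 * np.pi)              # uniform hemisphere pdf

def pdf_cosine(cos_theta):
    return cos_theta / np.pi                # cosine-weighted hemisphere pdf

cos_theta = 0.6                             # sampled direction's cos(theta)
p_u, p_c = pdf_uniform(cos_theta), pdf_cosine(cos_theta)
w_uniform = p_u / (p_u + p_c)               # balance heuristic: w_i = p_i / sum_j p_j
w_cosine = p_c / (p_u + p_c)
print(w_uniform, w_cosine)                  # weights sum to 1
```

Weighting each strategy's samples this way keeps the combined estimator unbiased while letting whichever strategy matches the integrand better dominate, which is what allows the sample count to be reduced.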
Ellis, Sam; Reader, Andrew J
2018-04-26
Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example, to observe and quantitate changes in functional behaviour in tumors after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalizing voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here, we present two additional novel longitudinal difference-image priors and evaluate their performance using two-dimensional (2D) simulation studies and a three-dimensional (3D) real dataset case study. We have previously proposed a simultaneous difference-image-based penalized maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have (a) low entropy (DE-PML), and (b) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D-simulated treatment response [18F]fluorodeoxyglucose (FDG) brain tumor datasets and compared to standard maximum likelihood expectation-maximization (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumor behaviour, and interscan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard reconstructions with increased count levels. In tumor regions, each method produces subtly different results in terms of preservation of tumor quantitation and reconstruction root-mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumor means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumor. Across a range of tumor responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalizing difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantitation of mean intensity via DE-PML, or in terms of tumor RMSE via DTV-PML.
Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
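A hedged sketch of the three difference-image penalty terms compared above, evaluated on a toy difference image d between two longitudinal reconstructions; the exact functional forms and binning in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
d = np.zeros((32, 32))
d[10:14, 10:14] = 1.0                              # localized tumor change
d += 0.01 * rng.normal(size=d.shape)               # background noise

l1_penalty = np.abs(d).sum()                       # DS-like: sparse differences

hist, _ = np.histogram(d, bins=64, density=True)
p = hist[hist > 0] / hist[hist > 0].sum()
entropy_penalty = -(p * np.log(p)).sum()           # DE-like: low-entropy differences

tv_penalty = (np.abs(np.diff(d, axis=0)).sum()
              + np.abs(np.diff(d, axis=1)).sum())  # DTV-like: sparse spatial gradients
print(l1_penalty, entropy_penalty, tv_penalty)
```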
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To improve this situation, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox to detect simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox by gear profile error detection in an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is taken as the filter output and Yule-Walker and Kalman filters are used to estimate the parameters. The results confirm the high performance of the new proposed fault detection method.
Reconstructing latent dynamical noise for better forecasting observables
NASA Astrophysics Data System (ADS)
Hirata, Yoshito
2018-03-01
I propose a method for reconstructing multi-dimensional dynamical noise inspired by the embedding theorem of Muldoon et al. [Dyn. Stab. Syst. 13, 175 (1998)] by regarding multiple predictions as different observables. Then, applying the embedding theorem of Stark et al. [J. Nonlinear Sci. 13, 519 (2003)] for a forced system, I produce time series forecasts by supplying the reconstructed past dynamical noise as auxiliary information. I demonstrate the proposed method on toy models driven by auto-regressive models or independent Gaussian noise.
Horizontal decomposition of data table for finding one reduct
NASA Astrophysics Data System (ADS)
Hońko, Piotr
2018-04-01
Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing superreducts and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts runs relatively fast on larger databases.
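To make the decomposition idea concrete, here is a minimal Python sketch under assumptions not stated in the abstract: rows are tuples of attribute values ending with a decision value, and a simple greedy heuristic (grow attributes in index order, then prune) stands in for "any heuristic method for finding one reduct".

```python
# Sketch: superreduct as the union of per-subtable reducts (assumed scheme).
from itertools import islice

def consistent(rows, attrs):
    """True if rows with equal values on `attrs` share one decision."""
    seen = {}
    for *vals, dec in rows:
        key = tuple(vals[a] for a in attrs)
        if seen.setdefault(key, dec) != dec:
            return False
    return True

def one_reduct(rows, n_attrs):
    """Greedy: add attributes until consistent, then drop redundant ones."""
    attrs = []
    for a in range(n_attrs):
        if consistent(rows, attrs):
            break
        attrs.append(a)
    for a in list(attrs):                      # backward elimination
        trial = [x for x in attrs if x != a]
        if consistent(rows, trial):
            attrs = trial
    return set(attrs)

def superreduct(rows, n_attrs, chunk=1000):
    """Union of subtable reducts; the subtable size is arbitrary."""
    it = iter(rows)
    result = set()
    while True:
        sub = list(islice(it, chunk))
        if not sub:
            return result
        result |= one_reduct(sub, n_attrs)
```

Because each subtable is processed independently, the chunks can be streamed from disk, which is what makes the memory footprint small.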
NASA Astrophysics Data System (ADS)
Al-Temeemy, Ali A.
2018-03-01
A descriptor is proposed for use in domiciliary healthcare monitoring systems. The descriptor is produced using chromatic methodology to extract robust features from the monitoring system's images. It has superior discrimination capabilities, is robust to events that normally disturb monitoring systems, and requires less computational time and storage space to achieve recognition. A method of human region segmentation is also used with this descriptor. The performance of the proposed descriptor was evaluated using experimental data sets obtained through a series of experiments performed in the Centre for Intelligent Monitoring Systems, University of Liverpool. The evaluation results show high recognition performance for the proposed descriptor in comparison to traditional descriptors, such as moment invariants. The results also show the effectiveness of the proposed segmentation method regarding distortion effects associated with domiciliary healthcare systems.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
Method of Making Large Area Nanostructures
NASA Technical Reports Server (NTRS)
Marks, Alvin M.
1995-01-01
A method which enables the high speed formation of nanostructures on large area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large area multi-electrode (Spindt type emitter source) to produce multiple electron beams simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 x 100,000 array of electron point sources, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 meter squared surface by multi-electron beam impact on a 1 cm squared surface of an insulating material, is proposed.
Xu, Kefeng; Chen, Zhonghui; Zhou, Ling; Zheng, Ou; Wu, Xiaoping; Guo, Longhua; Qiu, Bin; Lin, Zhenyu; Chen, Guonan
2015-01-06
A fluorometric method for pyrophosphatase (PPase) activity detection was developed based on click chemistry. Cu(II) can coordinate with pyrophosphate (PPi). The addition of PPase to this system destroys the coordination compound, because PPase catalyzes the hydrolysis of PPi into inorganic phosphate and produces free Cu(II). The free Cu(II) can be reduced by sodium ascorbate (SA) to form Cu(I), which in turn initiates the ligation reaction between nonfluorescent 3-azidocoumarins and terminal alkynes to produce a highly fluorescent triazole complex. On this basis, a simple and sensitive turn-on fluorometric method for PPase can be developed. The fluorescence intensity of the system has a linear relationship with the logarithm of the PPase concentration in the range of 0.5 to 10 mU, with a detection limit down to 0.2 mU (S/N = 3). This method is cost-effective and convenient, without any labels or complicated operations. The proposed system was applied to screen potential PPase inhibitors with high efficiency. The proposed method can be applied to the diagnosis of PPase-related diseases.
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.
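As an illustration of the classification step, here is a minimal sketch of maximum-likelihood classification with multivariate Gaussian class models; the class means and covariances are assumed to come from training regions, which the abstract does not detail.

```python
# Minimal sketch: ML tissue classification with Gaussian class models.
import numpy as np

def ml_segment(features, means, covs):
    """features: (n_pixels, n_channels); means/covs: per-class Gaussians.
    Returns the index of the most likely class for each pixel."""
    n_classes = len(means)
    log_like = np.empty((features.shape[0], n_classes))
    for k in range(n_classes):
        d = features - means[k]
        inv = np.linalg.inv(covs[k])
        _, logdet = np.linalg.slogdet(covs[k])
        # Gaussian log-likelihood up to a constant: -0.5 (d' S^-1 d + log|S|)
        log_like[:, k] = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet)
    return np.argmax(log_like, axis=1)
```

Intensity-artifact elimination would be applied to `features` before this step, so that each class model remains valid across the whole image.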
Symbiotic organisms search algorithm for dynamic economic dispatch with valve-point effects
NASA Astrophysics Data System (ADS)
Sonmez, Yusuf; Kahraman, H. Tolga; Dosoglu, M. Kenan; Guvenc, Ugur; Duman, Serhat
2017-05-01
In this study, the symbiotic organisms search (SOS) algorithm is proposed to solve the dynamic economic dispatch problem with valve-point effects, one of the most important problems of the modern power system. Practical constraints such as valve-point effects, ramp rate limits, and prohibited operating zones have been taken into account. The proposed algorithm was tested on five different test cases in 5-unit, 10-unit, and 13-unit systems. The obtained results have been compared with other well-known metaheuristic methods reported previously. The results show that the proposed algorithm has good convergence and produces better results than the other methods.
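The valve-point cost model referred to here is the standard one in the dynamic economic dispatch literature; the sketch below evaluates it. The coefficients and the ramp-rate check are illustrative, not taken from the paper.

```python
# Fuel cost with valve-point effects, the standard dispatch model:
#   F(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|
import numpy as np

def fuel_cost(P, a, b, c, e, f, Pmin):
    """Cost of one generating unit at output P (MW)."""
    return a + b * P + c * P**2 + abs(e * np.sin(f * (Pmin - P)))

def dispatch_cost(P, coeffs, P_prev=None, ramp=None):
    """Total cost of a dispatch, with an optional ramp-rate check."""
    cost = sum(fuel_cost(p, *cf) for p, cf in zip(P, coeffs))
    if P_prev is not None and ramp is not None:
        assert all(abs(p - q) <= r for p, q, r in zip(P, P_prev, ramp)), \
            "ramp rate limit violated"
    return cost
```

The rectified sine term makes the cost surface non-smooth and multimodal, which is why metaheuristics such as SOS are used instead of gradient-based solvers.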
An evidence-based patient-centered method makes the biopsychosocial model scientific.
Smith, Robert C; Fortin, Auguste H; Dwamena, Francesca; Frankel, Richard M
2013-06-01
To review the scientific status of the biopsychosocial (BPS) model and to propose a way to improve it. Engel's BPS model added patients' psychological and social health concerns to the highly successful biomedical model. He proposed that the BPS model could make medicine more scientific, but its use in education, clinical care, and, especially, research remains minimal. Many aver correctly that the present model cannot be defined in a consistent way for the individual patient, making it untestable and non-scientific. This stems from not obtaining relevant BPS data systematically, where one interviewer obtains the same information another would. Recent research by two of the authors has produced similar patient-centered interviewing methods that are repeatable and elicit just the relevant patient information needed to define the model at each visit. We propose that the field adopt these evidence-based methods as the standard for identifying the BPS model. Identifying a scientific BPS model in each patient with an agreed-upon, evidence-based patient-centered interviewing method can produce a quantum leap ahead in both research and teaching. A scientific BPS model can give us more confidence in being humanistic. In research, we can conduct more rigorous studies to inform better practices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Investigation of methods to produce a uniform cloud of fuel particles in a flame tube
NASA Technical Reports Server (NTRS)
Siegert, Clifford E.; Pla, Frederic G.; Rubinstein, Robert; Niezgoda, Thomas F.; Burns, Robert J.; Johnson, Jerome A.
1990-01-01
The combustion of a uniform, quiescent cloud of 30-micron fuel particles in a flame tube was proposed as a space-based, low-gravity experiment. The subject is the normal- and low-gravity testing of several methods to produce such a cloud, including telescoping propeller fans, air pumps, axial and quadrature acoustical speakers, and combinations of these devices. When operated in steady state, none of the methods produced an acceptably uniform cloud (+ or - 5 percent of the mean concentration), and voids in the cloud were clearly visible. In some cases, severe particle agglomeration was observed; however, these clusters could be broken apart by a short acoustic burst from an axially in-line speaker. Analyses and experiments reported elsewhere suggest that transient, acoustic mixing methods can enhance cloud uniformity while minimizing particle agglomeration.
Discovery of Boolean metabolic networks: integer linear programming based approach.
Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing
2018-04-11
Traditional drug discovery methods focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy are produced when unintended targets are affected in metabolic networks. Thus, identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on the Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effects reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at " http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip ".
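As a toy illustration of the ILP step, the sketch below selects a minimum-side-effect enzyme set that covers a target compound, using PuLP. The damage model, weights, and network data are hypothetical placeholders, since the abstract does not specify them.

```python
# Illustrative ILP: pick enzymes to knock out a target compound while
# minimizing a side-effect score (all data below is made up).
import pulp

enzymes = ["e1", "e2", "e3"]
targets = {"c_bad"}                          # compounds to be eliminated
side = {"e1": 3, "e2": 1, "e3": 2}           # side-effect weight per enzyme
knocks_out = {"e1": {"c_bad"}, "e2": {"c_bad"}, "e3": set()}

prob = pulp.LpProblem("drug_targets", pulp.LpMinimize)
x = {e: pulp.LpVariable(e, cat="Binary") for e in enzymes}
prob += pulp.lpSum(side[e] * x[e] for e in enzymes)          # objective
for c in targets:                                            # coverage
    prob += pulp.lpSum(x[e] for e in enzymes if c in knocks_out[e]) >= 1
prob.solve()
chosen = [e for e in enzymes if x[e].value() == 1]           # -> ["e2"]
```

The NP-completeness result mentioned in the abstract is consistent with this structure: the coverage constraints make it a weighted set-cover variant.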
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
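The two fusion rules can be sketched as follows, assuming the IMF and residue components have already been produced by a BEMD implementation (not shown); the paper's exact selection-and-weighting scheme may differ from this weighted-average form.

```python
# Sketch of the two BEMD fusion rules (assumed weighted-average variant).
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imf(imf_a, imf_b, win=3):
    """High-frequency rule: weight by local area energy."""
    e_a = uniform_filter(imf_a**2, win)
    e_b = uniform_filter(imf_b**2, win)
    w = e_a / (e_a + e_b + 1e-12)
    return w * imf_a + (1 - w) * imf_b

def fuse_residue(res_a, res_b, win=3):
    """Low-frequency rule: weight by local average gray difference."""
    d_a = np.abs(res_a - uniform_filter(res_a, win))
    d_b = np.abs(res_b - uniform_filter(res_b, win))
    w = d_a / (d_a + d_b + 1e-12)
    return w * res_a + (1 - w) * res_b

# fused = inverse BEMD, i.e. fuse_imf(imf1, imf2) + fuse_residue(res1, res2)
```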
Alves, R S; Teodoro, P E; Farias, F C; Farias, F J C; Carvalho, L P; Rodrigues, J I S; Bhering, L L; Resende, M D V
2017-08-17
Cotton produces one of the most important textile fibers in the world and has great relevance in the world economy. It is an economically important crop in Brazil, which is the world's fifth largest producer. However, studies evaluating genotype x environment (G x E) interactions in cotton are scarce in this country. Therefore, the goal of this study was to evaluate the G x E interactions for two important traits in cotton (fiber yield and fiber length) using the method proposed by Eberhart and Russell (simple linear regression) and reaction norm models (random regression). Eight trials with sixteen upland cotton genotypes, conducted in a randomized block design, were used. It was possible to identify a genotype with wide adaptability and stability for both traits. Reaction norm models have excellent theoretical and practical properties and led to more informative and accurate results than the method proposed by Eberhart and Russell and should, therefore, be preferred. Curves of genotypic values as a function of the environmental gradient, which predict the behavior of the genotypes along the environmental gradient, were generated. These curves make it possible to recommend genotypes for untested environmental levels.
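For reference, the Eberhart and Russell method regresses each genotype's mean on an environmental index; a slope near one together with a small deviation variance indicates wide adaptability and stability. A minimal sketch:

```python
# Eberhart & Russell (1966) stability analysis, minimal sketch.
import numpy as np

def eberhart_russell(Y):
    """Y: (n_genotypes, n_environments) matrix of trait means.
    Returns (slope b, deviation variance s2d) per genotype."""
    env_index = Y.mean(axis=0) - Y.mean()          # environmental index I_j
    out = []
    for y in Y:
        b, a = np.polyfit(env_index, y, 1)         # slope, intercept
        resid = y - (a + b * env_index)
        s2d = resid @ resid / max(len(y) - 2, 1)   # deviation variance
        out.append((b, s2d))
    return out
```

Reaction norm (random regression) models generalize this by treating the regression on the environmental gradient as a random effect, which is what yields the genotypic-value curves described above.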
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shape and surface roughness, which are considered key indicators for understanding Arctic sea ice, can be measured from a digital surface model (DSM) of the target area. Unmanned aerial vehicles (UAVs) flying at low altitudes enable, in principle, accurate DSM generation. However, the textureless surface and incessant motion of sea ice make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of the search window to analyze the matching results of the generated DSM and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM contained large errors along textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models
NASA Astrophysics Data System (ADS)
Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria
2017-08-01
In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions, which force the controller to adapt rapidly. Three important operating points are the idle, cruise, and full thrust cases for the entire flight envelope. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed. Stability analysis using the Lyapunov method is also presented. The proposed method is compared with two conventional techniques: first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.
NASA Astrophysics Data System (ADS)
Liang, Li-Feng; Zhang, Hong-Bing; Dan, Zhi-Wei; Xu, Zi-Qiang; Liu, Xiu-Juan; Cao, Cheng-Hao
2017-03-01
Simultaneous prestack inversion is based on the modified Fatti equation and uses the ratio of the P- and S-wave velocity as constraints. We use the relation of P-wave impedance and density (PID) and S-wave impedance and density (SID) to replace the constant Vp/Vs constraint, and we propose the improved constrained Fatti equation to overcome the effect of P-wave impedance on density. We compare the sensitivity of both methods using numerical simulations and conclude that the density inversion sensitivity improves when using the proposed method. In addition, the random conjugate-gradient method is used in the inversion because it is fast and produces global solutions. The use of synthetic and field data suggests that the proposed inversion method is effective in conventional and nonconventional lithologies.
Chen, Weitian; Sica, Christopher T; Meyer, Craig H
2008-11-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; these intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions, and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the other methods. The residual results show that the interpolation precision of the newly proposed method is better than ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those of the other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than that of Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area.
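For orientation, a minimal ordinary-Kriging sketch follows; the paper's variance component estimation is not reproduced here, and the exponential semivariogram parameters are assumed rather than fitted.

```python
# Ordinary kriging of TEC values, minimal sketch (assumed semivariogram).
import numpy as np

def semivar(h, nugget=0.1, sill=1.0, rng=500.0):
    """Exponential semivariogram with assumed parameters."""
    return nugget + (sill - nugget) * (1 - np.exp(-h / rng))

def krige(xy, z, xy0):
    """xy: (n, 2) station coordinates; z: TEC values; xy0: query point."""
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = semivar(h)
    A[n, n] = 0.0                                   # unbiasedness row/column
    b = np.ones(n + 1)
    b[:n] = semivar(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)                       # weights + Lagrange mult.
    return w[:n] @ z
```

The paper's contribution sits one level up: estimating the unknown variance components of signal and noise that this sketch simply assumes.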
Reliable prediction intervals with regression neural networks.
Papadopoulos, Harris; Haralambous, Haris
2011-10-01
This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
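The split (inductive) variant of conformal prediction can be sketched in a few lines; the paper's exact nonconformity measure for neural networks may differ from the plain absolute residual used here.

```python
# Split conformal prediction intervals around a regression model.
import numpy as np

def conformal_interval(model, X_cal, y_cal, X_new, alpha=0.1):
    """Returns (lo, hi) arrays with ~(1 - alpha) coverage under i.i.d. data.
    `model` is any fitted regressor exposing .predict()."""
    scores = np.abs(y_cal - model.predict(X_cal))    # calibration residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))          # conformal quantile rank
    q = np.sort(scores)[min(k, n) - 1]
    pred = model.predict(X_new)
    return pred - q, pred + q
```

The guarantee requires only exchangeability of the calibration and test data, which is the "nothing more than i.i.d." assumption the abstract refers to.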
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
An efficient intensity-based ready-to-use X-ray image stitcher.
Wang, Junchen; Zhang, Xiaohui; Sun, Zhen; Yuan, Fuzhen
2018-06-14
The limited field of view of the X-ray image intensifier makes it difficult to cover a large target area with a single X-ray image. X-ray image stitching techniques have been proposed to produce a panoramic X-ray image. This paper presents an efficient intensity-based X-ray image stitcher, which does not rely on accurate C-arm motion control or auxiliary devices and hence is ready to use in the clinic. The stitcher consumes sequentially captured X-ray images with overlap areas and automatically produces a panoramic image. The gradient information for optimization of image alignment is obtained using a back-propagation scheme, so it is convenient to adopt various image warping models. The proposed stitcher has the following advantages over existing methods: (1) no additional hardware modification or auxiliary markers are needed; (2) it is more robust than feature-based approaches; (3) arbitrary warping models and shapes of the region of interest are supported; (4) seamless stitching is achieved using multi-band blending. Experiments have been performed to confirm the effectiveness of the proposed method. The proposed X-ray image stitcher is efficient, accurate, and ready to use in the clinic. Copyright © 2018 John Wiley & Sons, Ltd.
Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng
2018-03-01
Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both feature space and the image plane, and 2) they are distributed sparsely in the spatial domain. These observations inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal learning method and the external learning method are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee the performance, thereby simplifying the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves on the single learning methods in both qualitative and quantitative assessments. Surprisingly, it shows superior capability on noisy images and outperforms state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that have an impact on their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations over a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove biases from model outputs.
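The equidistant CDF matching component (Li et al. 2010) amounts to quantile mapping with a correction for the difference between historical and future model distributions; a minimal sketch:

```python
# Equidistant CDF matching (Li et al. 2010), minimal sketch: adjust each
# future model value by the observed-minus-historical quantile difference
# evaluated at that value's own CDF position.
import numpy as np

def edcdf(obs, model_hist, model_fut):
    probs = np.linspace(0.01, 0.99, 99)
    q_obs = np.quantile(obs, probs)
    q_mh = np.quantile(model_hist, probs)
    q_mf = np.quantile(model_fut, probs)
    # CDF position of each future value within the future-model distribution
    p = np.interp(model_fut, q_mf, probs)
    return model_fut + np.interp(p, probs, q_obs) - np.interp(p, probs, q_mh)
```

In the proposed EDCDFANN, the ANN surrogate supplies the bulk of the correction and this quantile adjustment handles the tails.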
Integrating Multiple Data Sources for Combinatorial Marker Discovery: A Study in Tumorigenesis.
Bandyopadhyay, Sanghamitra; Mallik, Saurav
2018-01-01
Identification of combinatorial markers from multiple data sources is a challenging task in bioinformatics. Here, we propose a novel computational framework for identifying significant combinatorial markers ( s) using both gene expression and methylation data. The gene expression and methylation data are integrated into a single continuous data as well as a (post-discretized) boolean data based on their intrinsic (i.e., inverse) relationship. A novel combined score of methylation and expression data (viz., ) is introduced which is computed on the integrated continuous data for identifying initial non-redundant set of genes. Thereafter, (maximal) frequent closed homogeneous genesets are identified using a well-known biclustering algorithm applied on the integrated boolean data of the determined non-redundant set of genes. A novel sample-based weighted support ( ) is then proposed that is consecutively calculated on the integrated boolean data of the determined non-redundant set of genes in order to identify the non-redundant significant genesets. The top few resulting genesets are identified as potential s. Since our proposed method generates a smaller number of significant non-redundant genesets than those by other popular methods, the method is much faster than the others. Application of the proposed technique on an expression and a methylation data for Uterine tumor or Prostate Carcinoma produces a set of significant combination of markers. We expect that such a combination of markers will produce lower false positives than individual markers.
Contrast-dependent saturation adjustment for outdoor image enhancement.
Wang, Shuhang; Cho, Woon; Jang, Jinbeum; Abidi, Mongi A; Paik, Joonki
2017-01-01
Outdoor images captured in bad-weather conditions usually have poor intensity contrast and color saturation since the light arriving at the camera is severely scattered or attenuated. The task of improving image quality in poor conditions remains a challenge. Existing methods of image quality improvement are usually effective for a small group of images but often fail to produce satisfactory results for a broader variety of images. In this paper, we propose an image enhancement method, which makes it applicable to enhance outdoor images by using content-adaptive contrast improvement as well as contrast-dependent saturation adjustment. The main contribution of this work is twofold: (1) we propose the content-adaptive histogram equalization based on the human visual system to improve the intensity contrast; and (2) we introduce a simple yet effective prior for adjusting the color saturation depending on the intensity contrast. The proposed method is tested with different kinds of images, compared with eight state-of-the-art methods: four enhancement methods and four haze removal methods. Experimental results show the proposed method can more effectively improve the visibility and preserve the naturalness of the images, as opposed to the compared methods.
Feathering effect detection and artifact agglomeration index-based video deinterlacing technique
NASA Astrophysics Data System (ADS)
Martins, André Luis; Rodrigues, Evandro Luis Linhari; de Paiva, Maria Stela Veludo
2018-03-01
Several video deinterlacing techniques have been developed, and each one performs better under certain conditions. Occasionally, even the most modern deinterlacing techniques create frames of worse quality than primitive deinterlacing processes. This paper shows that the final image quality can be improved by combining different types of deinterlacing techniques. The proposed strategy is able to select between two types of deinterlaced frames and, if necessary, make local corrections of the defects. This decision is based on an artifact agglomeration index obtained from a feathering effect detection map. Starting from a deinterlaced frame produced by the "interfield average" method, the defective areas are identified and, if deemed appropriate, replaced by pixels generated through the "edge-based line average" method. Test results show that the proposed technique is able to produce video frames of higher quality than any single deinterlacing technique by taking the best from intra- and interfield methods.
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and of computer-aided technologies such as MRP I/II, ERP, and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation, and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and with the labour-intensive and time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. The approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image produced by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil-type sensors (produced by ZETEC Inc.). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws is demonstrated by the many results in which much finer images than the originals were reconstructed.
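Since the abstract does not specify the deconvolution scheme, the sketch below uses a Wiener-regularized inverse filter, a standard stable choice, with a measured PSF.

```python
# Deconvolution of an ECT image with a measured PSF (Wiener regularization;
# the paper's exact deconvolution scheme is not specified in the abstract).
import numpy as np

def wiener_deconvolve(img, psf, k=1e-2):
    """k is an assumed noise-to-signal regularization constant."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H)**2 + k) * G      # regularized inverse filter
    return np.real(np.fft.ifft2(F))
```

Plain inverse filtering (k = 0) would amplify noise wherever the PSF spectrum is small, which is why some regularization is needed in practice.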
Turbine blade profile design method based on Bezier curves
NASA Astrophysics Data System (ADS)
Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.
2017-11-01
In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting curves to given geometric conditions. The calculation of the profile shape is performed by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade. Numerical calculations of the obtained designs have been carried out. The results of the calculations show the efficiency of the chosen approach.
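A Bezier profile point can be evaluated with De Casteljau's algorithm, the numerically stable standard; a minimal sketch follows (the control points are illustrative, not a real blade section).

```python
# De Casteljau evaluation of a Bezier curve from its control points.
import numpy as np

def bezier_point(ctrl, t):
    """ctrl: (n, 2) control points; t in [0, 1]."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

# Sample an illustrative 4-point curve at 50 parameter values.
curve = np.array([bezier_point([[0, 0], [0.3, 0.2], [0.7, 0.25], [1, 0]], t)
                  for t in np.linspace(0, 1, 50)])
```

In a design loop like the one described, the control-point coordinates become the variables of the multi-dimensional minimization, with the geometric restrictions expressed as constraints on them.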
Random sequences generation through optical measurements by phase-shifting interferometry
NASA Astrophysics Data System (ADS)
François, M.; Grosges, T.; Barchiesi, D.; Erra, R.; Cornet, A.
2012-04-01
The development of new techniques for producing random sequences with a high level of security is a challenging topic of research in modern cryptography. The proposed method is based on the measurement, by phase-shifting interferometry, of speckle signals arising from the interaction between light and structures. We show how combining amplitude and phase distributions (maps) in a numerical process can produce random sequences. The produced sequences satisfy all the statistical requirements of randomness and can be used in cryptographic schemes.
USDA-ARS?s Scientific Manuscript database
A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...
Cryptography Would Reveal Alterations In Photographs
NASA Technical Reports Server (NTRS)
Friedman, Gary L.
1995-01-01
Public-key decryption method proposed to guarantee authenticity of photographic images represented in form of digital files. In method, digital camera generates original data from image in standard public format; also produces coded signature to verify standard-format image data. Scheme also helps protect against other forms of lying, such as attaching false captions.
An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses
ERIC Educational Resources Information Center
Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark
2015-01-01
Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
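For reference, the DerSimonian and Laird moment estimator discussed above is:

```python
# DerSimonian-Laird estimator of between-study heterogeneity (tau^2).
import numpy as np

def dersimonian_laird(y, v):
    """y: study effect estimates; v: their within-study variances."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)         # fixed-effect weights
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q statistic
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / C)      # truncated at zero
```

The truncation at zero is one known source of the bias the abstract refers to, particularly when the number of studies is small.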
New hydrate formation methods in a liquid-gas medium
NASA Astrophysics Data System (ADS)
Chernov, A. A.; Pil'Nik, A. A.; Elistratov, D. S.; Mezentsev, I. V.; Meleshkin, A. V.; Bartashevich, M. V.; Vlasenko, M. G.
2017-01-01
Conceptually new methods of hydrate formation are proposed. The first one is based on the shock wave impact on a water-bubble medium. It is shown that the hydrate formation rate in this process is typically very high. A gas hydrate of carbon dioxide was produced. The process was experimentally studied using various initial conditions, as well as different external action magnitudes. The obtained experimental data are in good agreement with the proposed model. The other methods are based on the process of boiling liquefied gas in an enclosed volume of water (explosive boiling of a hydrating agent and the organization of a cyclic boiling-condensation process). The key features of these methods are the high hydrate formation rate combined with comparatively low power consumption, leading to a great expected efficiency of the technologies based on them. A set of experiments was carried out. Gas hydrates of refrigerant R134a, carbon dioxide, and propane were produced. The decomposition of a generated gas hydrate sample was investigated. The criteria for intensification of the hydrate formation process are formulated.
A method to determine agro-climatic zones based on correlation and cluster analyses
NASA Astrophysics Data System (ADS)
Borges Valeriano, Taynara Tuany; de Souza Rolim, Glauco; de Oliveira Aparecido, Lucas Eduardo
2017-12-01
Determining agro-climatic zones (ACZs) is traditionally done by cross-comparing meteorological elements such as air temperature, rainfall, and water deficit (DEF). This study proposes a new method based on correlations between monthly DEFs during the crop cycle and annual yield, and performs a multivariate cluster analysis on these correlations. This "correlation method" was applied to all municipalities in the state of São Paulo to determine ACZs for coffee plantations. A traditional ACZ method for coffee, based on temperature and DEF ranges (Evangelista et al.; RBEAA, 6:445-452, 2002), was applied to the study area for comparison against the correlation method. The traditional ACZ classified the "Alta Mogiana," "Média Mogiana," and "Garça and Marília" regions, all traditional coffee regions, as merely suitable or even restricted for coffee plantations. These traditional regions have produced coffee since 1800 and should not be classified as restricted. The correlation method classified those areas as high-producing regions and expanded them into other areas. The proposed method is innovative because it is more detailed than common ACZ methods. Each developmental crop phase was analyzed based on correlations between the monthly DEF and yield, better reflecting the importance of crop physiology in relation to climate.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
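A minimal sketch of the gated-LMS idea follows; the smoothed reference, step size, and gating threshold are assumed parameters, and the paper's exact update rule may differ.

```python
# Gated LMS nonuniformity correction, minimal sketch: per-pixel gain and
# offset follow an LMS update toward a spatially smoothed reference, but
# the update is gated off where temporal variation is lacking.
import numpy as np
from scipy.ndimage import uniform_filter

def gated_lms_nuc(frames, mu=1e-3, gate_thresh=2.0):
    frames = [np.asarray(f, dtype=float) for f in frames]
    gain = np.ones_like(frames[0])
    offset = np.zeros_like(frames[0])
    prev = frames[0]
    out = []
    for x in frames:
        y = gain * x + offset                    # corrected frame
        ref = uniform_filter(y, size=5)          # desired (smoothed) output
        err = y - ref
        gate = np.abs(x - prev) > gate_thresh    # update only where motion
        gain -= mu * err * x * gate
        offset -= mu * err * gate
        prev = x
        out.append(y)
    return out
```

Freezing the update in static regions is precisely what suppresses ghosting: without the gate, stationary scene structure leaks into the gain and offset maps.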
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this paper, a typical lead-acid battery formation process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model is established with the objective of minimizing the formation electricity cost in a single period. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation is shown using the PSO algorithm to solve this mathematical model, and the proposed optimization strategy is shown to be effective, and readily adoptable, for energy saving and efficiency optimization in battery-producing industries.
NASA Astrophysics Data System (ADS)
Areekul, Phatchakorn; Senjyu, Tomonobu; Urasaki, Naomitsu; Yona, Atsushi
Electricity price forecasting is becoming increasingly relevant to power producers and consumers in the new competitive electric power markets when planning bidding strategies in order to maximize their benefits and utilities, respectively. This paper proposes a method to predict hourly electricity prices for next-day electricity markets by a combined methodology of ARIMA and ANN models. The proposed method is examined on the Australian National Electricity Market (NEM), New South Wales region, for the year 2006. A comparison of the forecasting performance of the ARIMA, ANN, and combined (ARIMA-ANN) models is presented. Empirical results indicate that the ARIMA-ANN model can improve price forecasting accuracy.
A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.
Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L
2016-10-01
Entropy is an effective tool for the investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with data produced using surrogate methods. Unlike for continuous movement, no appropriate surrogate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying the fine-scale dynamics of the observed signal while maintaining its macro-structural characteristics. Comparison of entropy estimates indicated that observed signals had greater regularity than surrogates and were the result not only of stochastic but also of deterministic processes. The proposed surrogate method is both a valid and reliable technique for investigating determinism in other discrete human movement time series.
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate the percentage of US producers and milk not currently meeting the proposed bulk tank somatic cell counts (BTSCC) limits. Five different limits of BTSCC were evaluated for compliance: 750K, 600K, 500K, and 400K using the current US methods and 400K using th...
Practical Framework: Implementing OEE Method in Manufacturing Process Environment
NASA Astrophysics Data System (ADS)
Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.
2016-02-01
Manufacturing process environments require reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, a machine is often unable to achieve the desired performance. Since performance affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of a machine. The reliable results produced by OEE can then be used to propose suitable corrective actions. Many published papers discuss the purpose and benefits of OEE, covering the what and why factors. However, the how factor has not yet been revealed, especially the implementation of OEE in a manufacturing process environment. Thus, this paper presents a practical framework to implement OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, who want to start measuring machine performance and later improve it.
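For readers implementing the framework, the OEE calculation itself is standard: OEE = Availability x Performance x Quality.

```python
# Standard OEE calculation (all inputs in consistent time units).
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Example: 480 min shift, 45 min down, 0.8 min/part, 500 made, 480 good.
print(round(oee(480, 45, 0.8, 500, 480), 3))     # -> 0.8
```

Splitting the score into its three factors is what makes the measurement actionable: each factor points at a different class of loss (stoppages, slow cycles, defects).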
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-01-01
Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. A methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and consequently a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on SHAm determination for anaerobic mixed cultures were evaluated. Additionally, the SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate, and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Application of this approach is thus beneficial for establishing a stable anaerobic hydrogen-producing system. PMID:24912488
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
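To make the horsetail-matching formulation above concrete, the following Python sketch minimizes the integrated squared difference between a design's kernel-smoothed CDF and a steep target CDF. All names (quantity_of_interest, target_cdf) and the toy performance metric are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
u_samples = rng.normal(size=200)          # fixed samples of the uncertain input

def quantity_of_interest(x, u):
    # toy performance metric: uncertainty shifts and scales the response
    return (x - 1.0) ** 2 + 0.5 * u * x

def smooth_cdf(q, grid, h=0.05):
    # kernel-smoothed empirical CDF: average of Gaussian CDFs centred on the
    # samples, which makes the objective differentiable in the design variable
    return norm.cdf((grid[:, None] - q[None, :]) / h).mean(axis=1)

def target_cdf(grid):
    # a steep target CDF at a low value encodes "small output with low spread"
    return norm.cdf((grid - 0.2) / 0.1)

grid = np.linspace(-2.0, 4.0, 200)

def horsetail_objective(x):
    q = quantity_of_interest(x[0], u_samples)
    return np.trapz((smooth_cdf(q, grid) - target_cdf(grid)) ** 2, grid)

res = minimize(horsetail_objective, x0=[0.0], method="Nelder-Mead")
print("robust design x* =", res.x[0])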
High resolution OCT image generation using super resolution via sparse representation
NASA Astrophysics Data System (ADS)
Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi
2017-02-01
In this paper we propose a technique for obtaining a high resolution (HR) image from a single low resolution (LR) image, using a jointly learned dictionary, on the basis of image statistics research, which suggests that with an appropriate choice of an over-complete dictionary, image patches can be well represented as a sparse linear combination. Medical imaging for clinical analysis and medical intervention is used to create visual representations of the interior of the body, as well as visual representations of the function of organs or tissues (physiology). A number of medical imaging techniques are in use, such as MRI, CT scan, X-rays and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging, and one of its uses is in ophthalmology, where it is employed for analysis of choroidal thickness in healthy eyes and in disease states such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We have proposed a technique for enhancing OCT images that can be used for clearly identifying and analyzing particular diseases. Our method uses a dictionary learning technique to generate a high resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. For both dictionaries, the proposed method produces HR images of superior quality compared with the other SR method. The proposed technique is very effective for noisy OCT images and produces up-sampled and enhanced OCT images.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
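As a rough illustration of the recommended trapezoidal variant, the sketch below (an assumption-laden toy, not the authors' implementation) smooths noisy observations of a one-parameter linear ODE, x'(t) = theta * x(t), with a spline, then recovers theta by least squares on the trapezoidal estimating equation x(t_{i+1}) - x(t_i) = theta * (h/2) * (x(t_{i+1}) + x(t_i)).

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
theta_true = -0.7
t = np.linspace(0.0, 5.0, 60)
x_noisy = np.exp(theta_true * t) + 0.02 * rng.normal(size=t.size)

# penalized-spline-like smoother; smoothing target matched to the noise level
xs = UnivariateSpline(t, x_noisy, s=t.size * 0.02 ** 2)(t)

h = t[1] - t[0]
dx = xs[1:] - xs[:-1]              # left-hand side of the estimating equation
z = 0.5 * h * (xs[1:] + xs[:-1])   # trapezoidal regressor
theta_hat = (z @ dx) / (z @ z)     # one-dimensional least squares
print(f"theta_hat = {theta_hat:.3f} (true {theta_true})")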
NASA Astrophysics Data System (ADS)
Yuen, Kevin Kam Fung
2009-10-01
The most appropriate prioritization method is still one of the unsettled issues of the Analytic Hierarchy Process (AHP), although many studies have been made and applied. Interestingly, many AHP applications apply only Saaty's Eigenvector method, even though many studies have found that this method may produce rank reversals and have proposed various prioritization methods as alternatives. Some methods have been shown to be better than the Eigenvector method; however, these methods seem not to attract the attention of researchers. In this paper, eight important prioritization methods are reviewed. A Mixed Prioritization Operators Strategy (MPOS) is developed to select a vector that is prioritized by the most appropriate prioritization operator. To verify the new method, a case study of high school selection is revisited using the proposed method. The contribution is that MPOS is useful for solving prioritization problems in the AHP.
Engel, Aaron J; Bashford, Gregory R
2015-08-01
Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on estimation of first-order derivatives, which allows for accurate estimates using smaller smoothing filters than when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP) and cross-correlation (CC). The shear wave speed of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) was measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviations of measurements being 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by a factor of 1, 13, and 20 for DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
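A hedged numerical sketch of the core idea: for a wave travelling in one direction (as produced by directional filtering), the first-order wave equation u_t + c u_x = 0 yields the speed c = -u_t / u_x from first derivatives only. The synthetic Gaussian pulse and the threshold below are illustrative assumptions, not the paper's processing chain.

import numpy as np

c_true = 2.0                                   # m/s
x = np.linspace(0.0, 0.04, 200)                # lateral positions (m)
t = np.linspace(0.0, 0.015, 150)               # slow time (s)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-((X - c_true * T - 0.005) / 0.002) ** 2)   # travelling pulse

du_dx = np.gradient(u, x, axis=0)              # first-order derivatives only,
du_dt = np.gradient(u, t, axis=1)              # so little smoothing is needed

mask = np.abs(du_dx) > 0.1 * np.abs(du_dx).max()   # avoid division by ~0
c_est = np.median(-du_dt[mask] / du_dx[mask])
print(f"estimated speed: {c_est:.2f} m/s (true {c_true})")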
"Other" indirect methods for nuclear astrophysics
NASA Astrophysics Data System (ADS)
Trache, Livius
2018-01-01
In the house of the Trojan Horse Method (THM), I will say a few words about "other" indirect methods we use in nuclear physics for astrophysics, in particular those using rare ion beams, which can be used to evaluate radiative proton capture reactions. I add words about work done with the Professore we celebrate today, with a proposal, and some results with TECSA, for a simple method to produce and use an isomeric beam of 26mAl.
Sethi, Gaurav; Saini, B S
2015-12-01
This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents are likely to produce the healthiest offspring, which leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring; thereby, each of the least-fit parent chromosomes is combined with a high-fitness parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Chen, Weitian; Sica, Christopher T.; Meyer, Craig H.
2008-01-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462
Evaluation of a visual layering methodology for colour coding control room displays.
Van Laar, Darren; Deshe, Ofer
2002-07-01
Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays that had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours developed from psychological and cartographic principles, which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly for presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool is needed that enables the user to produce accurate segmentations by drawing only a sparse set of contours. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function consisting of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets, using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice segmentation accuracy increase of 10%) with very sparse contours (only 10%), which is promising for greatly decreasing the work expected from the user.
Real-time biscuit tile image segmentation method based on edge detection.
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter
2018-05-01
In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify the produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time tile pixel segmentation. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Color preservation for tone reproduction and image enhancement
NASA Astrophysics Data System (ADS)
Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh
2014-01-01
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea in realizing this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. The results show that the proposed method performs consistently better than existing approaches.
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
Image superresolution by midfrequency sparse representation and total variation regularization
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-01-01
Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by existing methods are not sufficiently sharp. Our work addresses both issues. First, we propose a method to extract midfrequency features for dictionary learning, which reduces memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off details and artifacts and sharpen step edges. Finally, the step edges produced by DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian-mixture-based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to the appropriate output interval according to a transformation function that depends on PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared with traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.
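The histogram-partitioning step can be sketched as follows; the PSO refinement of the transformation parameters and the chroma compensation are omitted, and the synthetic lightness data are an assumption for illustration. The partition points fall where the dominant mixture component changes, i.e. at the intersections of the weighted component densities.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
lightness = np.concatenate([rng.normal(30, 6, 4000),    # synthetic L* values
                            rng.normal(70, 8, 6000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lightness)

grid = np.linspace(0, 100, 1001).reshape(-1, 1)
dominant = gmm.predict_proba(grid).argmax(axis=1)       # winning component
cuts = grid[np.flatnonzero(np.diff(dominant)) + 1].ravel()
print("histogram partition points:", cuts)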
ERIC Educational Resources Information Center
Ford, Norman C.; Kane, Joseph W.
1971-01-01
Proposes a method of collecting solar energy by using available plastics for Fresnel lenses to focus heat onto a converter where thermal dissociation of water would produce hydrogen. The hydrogen would be used as an efficient non-polluting fuel. Cost estimates are included. (AL)
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
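As a toy illustration of the trend-preservation idea (not the authors' exact end-matching technique), the sketch below removes an estimated linear trend, builds a classical Fourier-amplitude surrogate of the residual, and restores the trend so that global nonstationarity in the mean is kept.

import numpy as np

rng = np.random.default_rng(3)
n = 512
t = np.arange(n)
series = 0.02 * t + np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=n)

trend = np.polyval(np.polyfit(t, series, deg=1), t)   # simple linear trend fit
residual = series - trend

spec = np.fft.rfft(residual)
phases = rng.uniform(0, 2 * np.pi, spec.size)         # randomize phases,
phases[0] = 0.0                                       # keep the mean untouched
surrogate_residual = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

surrogate = trend + surrogate_residual                # trend-preserving surrogate
print("original vs surrogate std:", series.std(), surrogate.std())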
Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo
2012-02-01
In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.
Proposal for measuring the molecular velocity vector with single-pulse coherent Raman spectroscopy
NASA Technical Reports Server (NTRS)
She, C. Y.
1983-01-01
Methods for simultaneous measurements of more than one flow velocity component using coherent Raman spectroscopy are proposed. It is demonstrated that using a kilowatt broad-band probe pulse (3-30 GHz) along with a megawatt narrow-band pump pulse (approximately 100 MHz), coherent Raman signal resulting from a single laser pulse is sufficient to produce a high-resolution Raman spectrum for a velocity measurement.
Maritime Search and Rescue via Multiple Coordinated UAS
2017-06-12
performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two... account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to... compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is
NASA Astrophysics Data System (ADS)
Razboinikov, A. A.; Vashchilin, V. V.
2016-10-01
In the paper, the problems of the gas transport system and the main factors motivating the development are described. The stages of a proposed reconstruction of the DG-90 combustion chamber are introduced. The basic elements of the elaborated method for assessing the risk of an emergency situation occurring are given. The expected efficiency from implementation of the proposed method is described.
Comparison of an Atomic Model and Its Cryo-EM Image at the Central Axis of a Helix
He, Jing; Zeil, Stephanie; Hallak, Hussam; McKaig, Kele; Kovacs, Julio; Wriggers, Willy
2016-01-01
Cryo-electron microscopy (cryo-EM) is an important biophysical technique that produces three-dimensional (3D) density maps at different resolutions. Because more and more models are being produced from cryo-EM density maps, validation of the models is becoming important. We propose a method for measuring local agreement between a model and the density map using the central axis of the helix. This method was tested using 19 helices from cryo-EM density maps between 5.5 Å and 7.2 Å resolution and 94 helices from simulated density maps. This method distinguished most of the well-fitting helices, although challenges exist for shorter helices. PMID:27280059
NASA Technical Reports Server (NTRS)
Patterson, J. C., Jr.; Jordan, F. L., Jr.
1975-01-01
A recently proposed method of flow visualization was investigated at the National Aeronautics and Space Administration's Langley Research Center. This method of flow visualization is particularly applicable to the study of lift-induced wing tip vortices, through which it is possible to record the entire life span of the vortex. To accomplish this, a vertical screen of smoke was produced perpendicular to the flight path and allowed to become stationary. A model was then driven through the screen of smoke, producing the circular vortex motion, which was made visible as the smoke was induced along the path taken by the flow and was recorded by high-speed motion pictures.
Improving ontology matching with propagation strategy and user feedback
NASA Astrophysics Data System (ADS)
Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu
2015-07-01
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influence between potential entity mappings across ontologies, which can help to identify correct correspondences and recover missed ones. The estimation of an appropriate threshold is a difficult task. We propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. Using the proposed algorithms, an essentially flat image was obtained in which the defective areas have a lower gray level than this plane, so the defective areas can be easily extracted by a global threshold value. The experimental results, with a 94.0% classification rate based on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Ikushima, Koujiro; Arimura, Hidetaka; Jin, Ze; Yabu-Uchi, Hidetake; Kuwazuru, Jumpei; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki
2017-01-01
We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the 'degree of GTV' for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for training and testing the steps of the proposed framework. The final GTV regions were determined using the OCS method that can be used to select a global optimum object contour based on multiple active delineations with a LSM around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO were 0.763 and 0.701, respectively, obtained by the proposed framework. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite
NASA Astrophysics Data System (ADS)
Kanakubo, Masaaki; Hagiwara, Masafumi
In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA. In the Bee system, each chromosome initially tries to find a good solution individually, as a global search. When some chromosome is regarded as a superior one, the other chromosomes search around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a pseudo-simplex method is employed for the global search of the Bee system in order to strengthen global search ability, while strong local search ability is retained. The proposed method thus has the synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
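A minimal sketch of the Inverse-elitism ingredient for a real-coded chromosome is given below; the Bee system and pseudo-simplex components are omitted, and the bounds and toy objective are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(4)
lo, hi = -5.0, 5.0

def fitness(x):                       # toy objective: maximize the negative sphere
    return -np.sum(x ** 2)

pop = rng.uniform(lo, hi, size=(20, 3))
elite = pop[np.argmax([fitness(ind) for ind in pop])]

inverse_elite = lo + hi - elite       # gene-wise reversal within the bounds,
                                      # placing a new individual far from the elite
print("elite:        ", np.round(elite, 3))
print("inverse-elite:", np.round(inverse_elite, 3))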
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prowell, Stacy J; Symons, Christopher T
2015-01-01
Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.
NASA Astrophysics Data System (ADS)
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, as features are added, the probability of including mutually dependent features increases, which reduces accuracy. In addition, several parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. Optimization algorithms are an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence of image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was 6% higher than that of RF classification. The processing time of the proposed method was relatively low because of the unit of image analysis (image objects). These results show the superiority of the proposed method in terms of time and accuracy.
NASA Astrophysics Data System (ADS)
Ono, Ryo; Tokumitsu, Yusuke; Zen, Shungo; Yonemori, Seiya
2014-11-01
We propose a method for producing OH, H, O, O3, and O2(a1Δg) using the vacuum ultraviolet photodissociation of H2O and O2 as a tool for studying the reaction processes of plasma medicine. For photodissociation, an H2O/He or O2/He mixture flowing in a quartz tube is irradiated by a Xe2 or Kr2 excimer lamp. The effluent can be applied to a target. Simulations show that the Xe2 lamp method can produce OH radicals within 0.1-1 ppm in the effluent at 5 mm from a quartz tube nozzle. This is comparable to those produced by a helium atmospheric-pressure plasma jet (He-APPJ) currently used in plasma medicine. The Xe2 lamp method also produces H atoms of, at most, 6 ppm. In contrast, the maximum O densities produced by the Xe2 and Kr2 lamp methods are 0.15 ppm and 2.5 ppm, respectively; these are much lower than those from He-APPJ (several tens of ppm). Both lamp methods can produce ozone at concentrations above 1000 ppm and O2(a1Δg) at tens of ppm. The validity of the simulations is verified by measuring the O3 and OH densities produced by the Xe2 lamp method using ultraviolet absorption and laser-induced fluorescence. The differences between the measured and simulated densities for O3 and OH are 20% and factors of 3-4, respectively.
Fully Convolutional Network-Based Multifocus Image Fusion.
Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua
2018-07-01
As the optical lenses of cameras always have a limited depth of field, the captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image using several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains. However, fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images, by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregate score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregate score map to produce and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment.
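The fusion arithmetic described above, stripped of the FCN and CRF stages, can be sketched as follows; the stand-in score maps and random source images are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(5)
h, w = 64, 64
img_a = rng.random((h, w))                       # source focused on the left
img_b = rng.random((h, w))                       # source focused on the right

score_a = np.tile(np.linspace(1, 0, w), (h, 1))  # stand-in focus score maps
score_b = np.tile(np.linspace(0, 1, w), (h, 1))

aggregate = 0.5 * (score_a + (1.0 - score_b))    # inverted map averaged in
decision = (aggregate > 0.5).astype(float)       # binary decision map

fused = decision * img_a + (1.0 - decision) * img_b   # per-pixel weighted blend
print("fused shape:", fused.shape)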
Measurement method of magnetic field for the wire suspended micro-pendulum accelerometer.
Lu, Yongle; Li, Leilei; Hu, Ning; Pan, Yingjun; Ren, Chunhua
2015-04-13
The force producer is one of the core components of a wire-suspended micro-pendulum accelerometer, and the stability of the permanent magnet in the force producer determines the consistency of the acceleration sensor's scale factor. For an assembled accelerometer, direct measurement of the magnetic field strength is not feasible, as a magnetometer probe cannot be placed inside the micro-space of the sensor. This paper proposes an indirect measurement method for the remanent magnetization of the micro-pendulum accelerometer. The measurement is based on the working principle of the accelerometer, using the current output in several different scenarios to resolve the remanent magnetization of the permanent magnet. An iterative least-squares algorithm was used to adjust the data due to the nonlinearity of this problem. The calculated remanent magnetization was 1.035 T. Compared to the true value, the error was less than 0.001 T. The proposed method provides effective theoretical guidance for measuring the magnetic field of the wire-suspended micro-pendulum accelerometer, correcting the scale factor and temperature influence coefficients, etc.
Tzallas, A T; Karvelis, P S; Katsis, C D; Fotiadis, D I; Giannopoulos, S; Konitsiotis, S
2006-01-01
The aim of the paper is to analyze transient events in inter-ictal EEG recordings and classify epileptic activity into focal or generalized epilepsy using an automated method. A two-stage approach is proposed. In the first stage, the observed transient events of a single channel are classified into four categories: epileptic spike (ES), muscle activity (EMG), eye blinking activity (EOG), and sharp alpha activity (SAA). The process is based on an artificial neural network. Different artificial neural network architectures were tried, and the network having the lowest error was selected using the hold-out approach. In the second stage, a knowledge-based system is used to produce a diagnosis of focal or generalized epileptic activity. The classification of transient events achieved a high overall accuracy (84.48%), while the knowledge-based system for epilepsy diagnosis correctly classified nine out of ten cases. The proposed method is advantageous since it effectively detects and classifies the undesirable activity into appropriate categories and produces a final outcome related to the existence of epilepsy.
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
Two-Color Laser High-Harmonic Generation in Cavitated Plasma Wakefields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, Carl; Benedetti, Carlo; Esarey, Eric
2016-10-03
A method is proposed for producing coherent x-rays via high-harmonic generation using a laser interacting with highly-stripped ions in cavitated plasma wakefields. Two laser pulses of different colors are employed: a long-wavelength pulse for cavitation and a short-wavelength pulse for harmonic generation. This method enables efficient laser harmonic generation in the sub-nm wavelength regime.
Synthetic aperture ultrasound imaging with a ring transducer array: preliminary ex vivo results.
Qu, Xiaolei; Azuma, Takashi; Yogi, Takeshi; Azuma, Shiho; Takeuchi, Hideki; Tamano, Satoshi; Takagi, Shu
2016-10-01
Conventional medical ultrasound imaging has a low lateral spatial resolution, and the image quality depends on the depth of the imaging location. To overcome these problems, this study presents a synthetic aperture (SA) ultrasound imaging method using a ring transducer array. An experimental ring transducer array imaging system was constructed. The array was composed of 2048 transducer elements and had a diameter of 200 mm and an inter-element pitch of 0.325 mm. The imaging object was placed in the center of the ring transducer array, which was immersed in water. SA ultrasound imaging was then employed to scan the object and reconstruct the reflection image. Both wire phantom and ex vivo experiments were conducted. The proposed method was found to be capable of producing isotropic high-resolution images of the wire phantom. In addition, preliminary ex vivo experiments using porcine organs demonstrated the ability of the method to reconstruct high-quality images without any depth dependence. The proposed ring transducer array and SA ultrasound imaging method were shown to be capable of producing isotropic high-resolution images whose quality is independent of depth.
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association, as well as the construction of confidence intervals for the measure. A promising correction based on cross-validation is also investigated. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that using random forest or bagging models produces larger estimated values of the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding, and especially the calculation of the confidence intervals requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus, cross-validation is highly recommendable. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives: bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when the cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well, and their confidence intervals are narrower.
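As one hedged reading of the random-forest variant, the sketch below scores trial-level surrogacy as the out-of-bag R^2 of a forest predicting per-trial treatment effects on the true endpoint from the effects on the surrogate, with a percentile-bootstrap confidence interval. The synthetic effects and the use of the OOB score in place of the paper's cross-validation correction are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n_trials = 40
alpha = rng.normal(0.0, 1.0, n_trials)                 # per-trial surrogate effects
beta = 0.8 * alpha + 0.3 * rng.normal(size=n_trials)   # per-trial true-endpoint effects

def surrogacy(a, b):
    # out-of-bag R^2 guards against the optimism of in-sample fit
    rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
    rf.fit(a.reshape(-1, 1), b)
    return rf.oob_score_

boot = []
for _ in range(200):                                   # percentile bootstrap over trials
    idx = rng.integers(0, n_trials, n_trials)
    boot.append(surrogacy(alpha[idx], beta[idx]))

print("trial-level R^2:", round(surrogacy(alpha, beta), 3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))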
An algorithm for optimal fusion of atlases with different labeling protocols
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Aganj, Iman; Bhatt, Priyanka; Casillas, Christen; Salat, David; Boxer, Adam; Fischl, Bruce; Van Leemput, Koen
2014-01-01
In this paper we present a novel label fusion algorithm suited for scenarios in which different manual delineation protocols with potentially disparate structures have been used to annotate the training scans (hereafter referred to as "atlases"). Such scenarios arise when atlases have missing structures, when they have been labeled with different levels of detail, or when they have been taken from different heterogeneous databases. The proposed algorithm can be used to automatically label a novel scan with any of the protocols from the training data. Further, it enables us to generate new labels that are not present in any delineation protocol by defining intersections on the underlying labels. We first use probabilistic models of label fusion to generalize three popular label fusion techniques to the multi-protocol setting: majority voting, semi-locally weighted voting and STAPLE. Then, we identify some shortcomings of the generalized methods, namely the inability to produce meaningful posterior probabilities for the different labels (majority voting, semi-locally weighted voting) and to exploit the similarities between the atlases (all three methods). Finally, we propose a novel generative label fusion model that can overcome these drawbacks. We use the proposed method to combine four brain MRI datasets labeled with different protocols (with a total of 102 unique labeled structures) to produce segmentations of 148 brain regions. Using cross-validation, we show that the proposed algorithm outperforms the generalizations of majority voting, semi-locally weighted voting and STAPLE (mean Dice score 83%, vs. 77%, 80% and 79%, respectively). We also evaluated the proposed algorithm in an aging study, successfully reproducing some well-known results in cortical and subcortical structures. PMID:25463466
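For context, a minimal sketch of the plain majority-voting baseline that the paper generalizes is shown below; the proposed multi-protocol generative model itself is not reproduced, and the toy label maps are assumptions.

import numpy as np
from scipy.stats import mode

rng = np.random.default_rng(6)
n_atlases, shape = 5, (4, 4, 4)
atlas_labels = rng.integers(0, 3, size=(n_atlases, *shape))   # registered atlas label maps, labels 0..2

fused, _ = mode(atlas_labels, axis=0, keepdims=False)         # per-voxel most frequent label
print("fused label map shape:", fused.shape)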
Facile synthesis of biocompatible gold nanoparticles with organosilicone-coated surface properties
NASA Astrophysics Data System (ADS)
Xia, Lijin; Yi, Sijia; Lenaghan, Scott C.; Zhang, Mingjun
2012-07-01
In this study, a simple method for one-step synthesis of gold nanoparticles has been developed using an organosilicone surfactant, Silwet L-77, as both a reducing and capping agent. Synthesis of gold nanoparticles using this method is rapid and can be conducted conveniently at ambient temperature. Further refinement of the method, through the addition of sodium hydroxide and/or silver nitrate, allowed fine control over the size of spherical nanoparticles produced. Coated on the surface with organosilicone, the as-prepared gold nanoparticles were biocompatible and stable over the pH range from 5 to 12, and have been proven effective at transportation into MC3T3 osteoblast cells. The proposed method is simple, fast, and can produce size-controlled gold nanoparticles with unique surface properties for biomedical applications.
DOT National Transportation Integrated Search
1983-11-01
Author's abstract: A detailed re-analysis of available pedestrian accident data was utilized to define three sets of pedestrian safety public information and education (PI&E) messages. These messages were then produced and field tested. The objective...
Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo
2013-01-01
The poor interoperability of anthocyanin glycoside measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found the mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. The range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of those of the AOAC pH differential method. The large analytical chemistry differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, as proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by the Adams-Harbertson standard curve (versus Beer-Lambert) and by pH 1.8 (versus pH 1.0). The study recommends using AOAC Official Method 2005.02 for the analysis of wine anthocyanin glycosides.
PhySIC: a veto supertree method with desirable properties.
Ranwez, Vincent; Berry, Vincent; Criscuolo, Alexis; Fabre, Pierre-Henri; Guillemot, Sylvain; Scornavacca, Celine; Douzery, Emmanuel J P
2007-10-01
This paper focuses on veto supertree methods; i.e., methods that aim at producing a conservative synthesis of the relationships agreed upon by all source trees. We propose desirable properties that a supertree should satisfy in this framework, namely the non-contradiction property (PC) and the induction property (PI). The former requires that the supertree does not contain relationships that contradict one or a combination of the source topologies, whereas the latter requires that all topological information contained in the supertree is present in a source tree or collectively induced by several source trees. We provide simple examples to illustrate their relevance and that allow a comparison with previously advocated properties. We show that these properties can be checked in polynomial time for any given rooted supertree. Moreover, we introduce the PhySIC method (PHYlogenetic Signal with Induction and non-Contradiction). For k input trees spanning a set of n taxa, this method produces a supertree that satisfies the above-mentioned properties in O(kn^3 + n^4) computing time. The polytomies of the produced supertree are also tagged by labels indicating areas of conflict as well as those with insufficient overlap. As a whole, PhySIC enables the user to quickly summarize consensual information of a set of trees and localize groups of taxa for which the data require consolidation. Lastly, we illustrate the behaviour of PhySIC on primate data sets of various sizes, and propose a supertree covering 95% of all primate extant genera. The PhySIC algorithm is available at http://atgc.lirmm.fr/cgi-bin/PhySIC.
3D widefield light microscope image reconstruction without dyes
NASA Astrophysics Data System (ADS)
Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.
2015-03-01
3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach, is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy, generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
Technique for forming ITO films with a controlled refractive index
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markov, L. K., E-mail: l.markov@mail.ioffe.ru; Smirnova, I. P.; Pavluchenko, A. S.
2016-07-15
A new method for fabricating transparent conducting coatings based on indium-tin oxide (ITO) with a controlled refractive index is proposed. This method implies the successive deposition of material by electron-beam evaporation and magnetron sputtering. Sputtered coatings with different densities (and, correspondingly, different refractive indices) can be obtained by varying the ratio of the mass fractions of material deposited by different methods. As an example, films with effective refractive indices of 1.2, 1.4, and 1.7 in the wavelength range of 440–460 nm are fabricated. Two-layer ITO coatings with controlled refractive indices of the layers are also formed by the proposed method. Thus, multilayer transparent conducting coatings with desired optical parameters can be produced.
A Shearlet-based algorithm for quantum noise removal in low-dose CT images
NASA Astrophysics Data System (ADS)
Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng
2016-03-01
Low-dose CT (LDCT) scanning is a potential way to reduce the radiation exposure of X-ray in the population. It is therefore necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because the quantum noise can be modeled by a Poisson process, we first transform the quantum noise by using the Anscombe variance stabilizing transform (VST), producing an approximately Gaussian noise with unit variance. Second, the non-noise shearlet coefficients are obtained by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, which produces the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated from high-frequency sub-bands effectively. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing subtle details. It has certain value in clinical application.
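As an illustration of this pipeline (VST, transform-domain hard thresholding, inverse transform, inverse VST), the following minimal Python sketch substitutes an ordinary 2-D wavelet transform (PyWavelets) for the shearlet transform and uses the simple algebraic Anscombe inverse rather than an unbiased inverse, so it shows the structure of the method, not the authors' implementation:

    import numpy as np
    import pywt

    def anscombe(x):
        # Variance-stabilizing transform: Poisson -> approximately unit-variance Gaussian
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        # Simple algebraic inverse; an unbiased inverse exists but is more involved
        return (y / 2.0) ** 2 - 3.0 / 8.0

    def denoise_ldct(img, wavelet="db4", level=3, k=3.0):
        vst = anscombe(img.astype(float))
        coeffs = pywt.wavedec2(vst, wavelet, level=level)
        # Hard-threshold the detail coefficients; noise std is ~1 after the VST
        out = [coeffs[0]]
        for details in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, k, mode="hard") for d in details))
        rec = pywt.waverec2(out, wavelet)
        return inverse_anscombe(rec[:img.shape[0], :img.shape[1]])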
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
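A toy Python sketch of the idea, assuming a two-line absorbance model whose line-strength ratio depends on temperature through Boltzmann factors; the line positions, energies, and units are hypothetical, and the point is only that temperature, concentration, and baseline are fit in a single optimization rather than step-wise:

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, nu):
        T, N, b0 = params  # temperature, column density, baseline (hypothetical units)
        E1, E2, nu1, nu2, w = 100.0, 900.0, 10.0, 20.0, 0.5  # two-line toy spectrum
        s1, s2 = np.exp(-E1 / T), np.exp(-E2 / T)            # Boltzmann line strengths
        g1 = np.exp(-((nu - nu1) / w) ** 2)                  # Gaussian line shapes
        g2 = np.exp(-((nu - nu2) / w) ** 2)
        return b0 + N * (s1 * g1 + s2 * g2)

    nu = np.linspace(5.0, 25.0, 400)
    measured = model([600.0, 1.0, 0.02], nu) + 0.002 * np.random.randn(nu.size)
    # All parameters are adjusted simultaneously against the full spectrum
    fit = least_squares(lambda p: model(p, nu) - measured, x0=[400.0, 0.5, 0.0])
    T_fit, N_fit, b_fit = fit.x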
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Naturalness preservation image contrast enhancement via histogram modification
NASA Astrophysics Data System (ADS)
Tian, Qi-Chong; Cohen, Laurent D.
2018-04-01
Contrast enhancement is a technique for enhancing image contrast to obtain better visual quality. Since many existing contrast enhancement algorithms usually produce over-enhanced results, naturalness preservation needs to be considered in the framework of image contrast enhancement. This paper proposes a naturalness-preserving contrast enhancement method, which adopts histogram matching to improve the contrast and uses image quality assessment to automatically select the optimal target histogram. Both contrast improvement and naturalness preservation are considered in the target histogram, so this method can avoid the over-enhancement problem. In the proposed method, the optimal target histogram is a weighted sum of the original histogram, the uniform histogram, and a Gaussian-shaped histogram. The structural metric and the statistical naturalness metric are then used to determine the weights of the corresponding histograms. Finally, the contrast-enhanced image is obtained by matching the optimal target histogram. The experiments demonstrate that the proposed method outperforms the compared histogram-based contrast enhancement algorithms.
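A minimal numpy sketch of the core steps, assuming fixed blend weights rather than the quality-metric-driven weight selection described in the abstract:

    import numpy as np

    def blended_target_hist(img, w=(0.4, 0.3, 0.3), mu=128.0, sigma=40.0):
        # Target = weighted sum of original, uniform, and Gaussian-shaped histograms
        h_orig, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
        h_unif = np.full(256, 1.0 / 256)
        x = np.arange(256)
        h_gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        h_gauss /= h_gauss.sum()
        h = w[0] * h_orig + w[1] * h_unif + w[2] * h_gauss
        return h / h.sum()

    def match_histogram(img, target):
        h, _ = np.histogram(img, bins=256, range=(0, 256))
        cdf_src = np.cumsum(h) / h.sum()
        cdf_tgt = np.cumsum(target)
        # Map each gray level to the target level with the nearest CDF value
        lut = np.searchsorted(cdf_tgt, cdf_src).clip(0, 255).astype(np.uint8)
        return lut[img.astype(np.uint8)]

    # enhanced = match_histogram(img, blended_target_hist(img))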
NASA Astrophysics Data System (ADS)
Tajaddodianfar, Farid; Moheimani, S. O. Reza; Owen, James; Randall, John N.
2018-01-01
A common cause of tip-sample crashes in a Scanning Tunneling Microscope (STM) operating in constant current mode is the poor performance of its feedback control system. We show that there is a direct link between the Local Barrier Height (LBH) and robustness of the feedback control loop. A method known as the "gap modulation method" was proposed in the early STM studies for estimating the LBH. We show that the obtained measurements are affected by controller parameters and propose an alternative method which we prove to produce LBH measurements independent of the controller dynamics. We use the obtained LBH estimate to continuously update the gains of an STM proportional-integral (PI) controller and show that with tuned PI gains, the closed-loop system tolerates larger variations of LBH without experiencing instability. We report experimental results, conducted on two STM scanners, to establish the efficiency of the proposed PI tuning approach. Improved feedback stability is believed to help in avoiding tip-sample crashes in STMs.
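One way to read the gain-update idea: in constant-current STM the tunneling current varies roughly as exp(-1.025*sqrt(phi)*s) (phi the barrier height in eV, s the gap in angstroms), so the loop gain scales with sqrt(phi), and scaling the PI gains by 1/sqrt(phi) keeps the loop gain approximately constant as the LBH varies. The sketch below encodes that heuristic; it is our reading of the mechanism, not the authors' published tuning law, and all numbers are hypothetical:

    import numpy as np

    K_P0, K_I0, PHI_REF = 0.05, 20.0, 4.0  # nominal PI gains at a reference LBH

    def scheduled_pi_gains(phi_est):
        # Tunneling sensitivity d(ln I)/ds ~ -1.025*sqrt(phi); normalize it out
        scale = np.sqrt(PHI_REF / max(phi_est, 0.1))
        return K_P0 * scale, K_I0 * scale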
Elastic least-squares reverse time migration with velocities and density perturbation
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun
2018-02-01
Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate falsely migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in the Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-wave coupling in elastic LSRTM, no matter what model parametrization is used. We propose an elastic LSRTM method with density variations based on wave-mode separation to reduce these crosstalk artefacts, using P- and S-wave decoupled elastic velocity-stress equations to derive the demigration equations and gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results suggest that our method yields images of higher quality and has a faster residual convergence rate. Sensitivity analysis of migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.
NASA Astrophysics Data System (ADS)
Tudor, Albert Ioan; Motoc, Adrian Mihail; Ciobota, Cristina Florentina; Ciobota, Dan. Nastase; Piticescu, Radu Robert; Romero-Sanchez, Maria Dolores
2018-05-01
Thermal energy storage systems using phase change materials (PCMs) as latent heat storage are one of the main challenges at the European level in improving the performance and efficiency of concentrated solar power generation, due to their high energy density. PCMs with working temperatures in the range 300-500 °C are required for these purposes. However, their use is still limited by the corrosiveness of most high-temperature PCMs and by their low thermal transfer properties. Micro-encapsulation has been proposed as one method to overcome these problems. Different micro-encapsulation methods proposed in the literature are presented and discussed. An original process is proposed for the micro-encapsulation of potassium nitrate as a PCM in inorganic zinc oxide shells, based on a solvothermal method followed by spray drying to produce microcapsules with controlled phase composition and distribution; the transformation temperatures and enthalpies of the microcapsules, measured by differential scanning calorimetry, are presented.
NASA Astrophysics Data System (ADS)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.
2014-10-01
A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.
Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L
2016-02-01
Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
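For context, the signal enhancement ratio used as the baseline is conventionally computed voxel-wise from pre-contrast, early, and late post-contrast frames of the dynamic series; a minimal sketch (the 1.1 washout cutoff is the commonly cited value, not necessarily the one used in this study):

    import numpy as np

    def ser_map(s_pre, s_early, s_late, eps=1e-6):
        # SER = (early - pre) / (late - pre); SER > ~1.1 suggests washout kinetics
        return (s_early - s_pre) / (s_late - s_pre + eps)

    # suspicious = ser_map(pre, early, late) > 1.1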
Identity method to study chemical fluctuations in relativistic heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazdzicki, Marek; Grebieszkow, Katarzyna; Mackowiak, Maja
Event-by-event fluctuations of the chemical composition of the hadronic final state of relativistic heavy-ion collisions carry valuable information on the properties of strongly interacting matter produced in the collisions. However, in experiments incomplete particle identification distorts the observed fluctuation signals. The effect is quantitatively studied and a new technique for measuring chemical fluctuations, the identity method, is proposed. The method fully eliminates the effect of incomplete particle identification. The application of the identity method to experimental data is explained.
NASA Astrophysics Data System (ADS)
Moey, Siah Watt; Abdullah, Aminah; Ahmad, Ishak
2014-09-01
A new patent-pending process is proposed in this study to produce edible film directly from seaweed (Kappaphycus alvarezii). Seaweed together with other ingredients was used to produce the film through a casting technique. Physical and mechanical tests were performed on the edible film to examine the thickness, colour, transparency, solubility, tensile strength, elongation at break, water permeability rate, oxygen permeability rate and surface morphology. The produced film was transparent, stretchable and sealable, and has the basic properties for applications in the food, pharmaceutical, cosmetic, toiletries and agricultural industries. Edible film was successfully developed directly from dry seaweed instead of from alginate and carrageenan. The edible film processing method developed in this research was easier and cheaper than the method using alginate and carrageenan.
Stochastic Least-Squares Petrov–Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov–Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ^2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ^2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
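In the notation of the abstract, with A(ξ)x(ξ) = b(ξ) the parameterized system and the approximation sought in a subspace spanned by Ψ, the weighted LSPG solution can be written schematically as

    \hat{x} = \arg\min_{x \in \mathrm{range}(\Psi)} \left\| W \bigl( b(\xi) - A(\xi)\, x \bigr) \right\|_2,

where choosing different weighting operators W recovers residual minimization, error minimization in other weighted ℓ^2-norms, or the goal-oriented seminorm mentioned above.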
Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar
2016-02-01
Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time-consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the proposed method in terms of morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented a good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and, thus, leading to more reliable results.
Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao
2014-09-18
The autonomous vehicle is an automated system equipped with features like environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of road, the requirement of real-time planning, and the nonholonomic constraints of vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. The sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through the simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on autonomous vehicle and verified against many outdoor scenes; furthermore, a comparison of proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality.
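A compact Python sketch of the fitting step at the core of the approach, assuming Gaussian basis functions and a toy set of sampled waypoints; the perception grid map and drivable-region extraction are outside the scope of the sketch:

    import numpy as np

    def rbf_features(s, centers, width):
        # Gaussian radial basis functions evaluated at path parameter s
        return np.exp(-((s[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

    # Toy data: lateral offsets of waypoints sampled in the drivable region
    s = np.linspace(0.0, 1.0, 50)                  # normalized arc length
    y = 0.3 * np.sin(2 * np.pi * s) + 0.01 * np.random.randn(50)

    centers = np.linspace(0.0, 1.0, 10)
    Phi = rbf_features(s, centers, width=0.08)
    w, lr = np.zeros(10), 0.5
    for _ in range(2000):                          # batch gradient descent
        err = Phi @ w - y
        w -= lr * Phi.T @ err / len(s)
    smooth_path = Phi @ w                          # smooth lateral profile of the path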
Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun
2015-10-19
Photoacoustic tomography is a promising and rapidly developing methodology of biomedical imaging. It faces an increasingly urgent problem: reconstructing images from weak and noisy photoacoustic signals, which is highly beneficial for extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment are conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with well-preserved pattern details. The proposed method demonstrates the potential of photoacoustic tomography in expanding applications.
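A minimal numpy/scipy sketch of the decomposition step: the signal is expressed as a non-negative weighted sum of time-shifted copies of a pulse template, and the weights (standing in for local optical absorption) are recovered by non-negative least squares. The pulse shape, sampling rate, and shift grid here are illustrative assumptions:

    import numpy as np
    from scipy.optimize import nnls

    n, fs = 512, 40e6                        # samples and sampling rate (hypothetical)
    t = np.arange(n) / fs

    def pulse(t, t0, tau=50e-9):
        # Bipolar N-shaped pulse as an illustrative photoacoustic template
        u = (t - t0) / tau
        return -u * np.exp(-u ** 2)

    shifts = t[::4]                          # candidate arrival times
    D = np.stack([pulse(t, t0) for t0 in shifts], axis=1)   # pulse dictionary

    rng = np.random.default_rng(0)
    clean = 2.0 * pulse(t, 3e-6) + 1.0 * pulse(t, 6e-6)
    noisy = clean + 0.5 * rng.standard_normal(n)            # low-SNR signal

    weights, _ = nnls(D, noisy)              # weight factors map to absorption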
A note on the preconditioner Pm=(I+Sm)
NASA Astrophysics Data System (ADS)
Kohno, Toshiyuki; Niki, Hiroshi
2009-03-01
Kotakemori et al. [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), Journal of Computational and Applied Mathematics 145 (2002) 373-378] reported that the convergence rate of the iterative method with the preconditioner Pm=(I+Sm) was superior to that of the modified Gauss-Seidel method under certain conditions. These authors derived a theorem comparing the Gauss-Seidel method with the proposed method. However, by means of a counter example, Wen Li [Wen Li, A note on the preconditioned Gauss-Seidel (GS) method for linear systems, Journal of Computational and Applied Mathematics 182 (2005) 81-91] pointed out that there exists a special matrix that does not satisfy this comparison theorem. In this note, we analyze the reason why such a counter example can arise, and propose a preconditioner to overcome this problem.
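For reference, the preconditioner acts by transforming the system Ax = b into

    (I + S_m) A x = (I + S_m) b,

after which the (modified) Gauss-Seidel splitting is applied to the preconditioned matrix (I + S_m)A; the comparison theorems at issue concern the spectral radii of the resulting iteration matrices.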
A TPMS-based method for modeling porous scaffolds for bionic bone tissue engineering.
Shi, Jianping; Zhu, Liya; Li, Lan; Li, Zongan; Yang, Jiquan; Wang, Xingsong
2018-05-09
In the field of bone defect repair, gradient porous scaffolds have received increased attention because they provide a better environment for promoting tissue regeneration. In this study, we propose an effective method to generate bionic porous scaffolds based on the TPMS (triply periodic minimal surface) and SF (sigmoid function) methods. First, cortical bone morphological features (e.g., pore size and distribution) were determined for several regions of a rabbit femoral bone by analyzing CT scans. A finite element method was used to evaluate the mechanical properties of the bone in these respective areas. These results were used to place different TPMS substructures into one scaffold domain with smooth transitions. The geometrical parameters of the scaffolds were optimized to match the elastic properties of human bone. With the proposed method, a functionally graded porous scaffold could be designed and produced by additive manufacturing.
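A small numpy sketch of the TPMS-plus-sigmoid idea, assuming a gyroid surface and a sigmoid that grades the iso-level (and hence the porosity) along the build direction; the specific substructures and parameter values of the paper are not reproduced:

    import numpy as np

    def gyroid(x, y, z):
        # Classic gyroid TPMS implicit function
        return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

    def graded_scaffold(shape=(64, 64, 64), c1=-0.4, c2=0.4, k=8.0):
        x, y, z = np.indices(shape) * (2 * np.pi / shape[0])
        zn = z / z.max() - 0.5                        # normalized height in [-0.5, 0.5]
        c = c1 + (c2 - c1) / (1.0 + np.exp(-k * zn))  # sigmoid-graded iso-level
        return gyroid(x, y, z) < c                    # boolean solid/void voxel model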
Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.
Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua
2017-07-20
Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.
NASA Astrophysics Data System (ADS)
Maia, Priscila M. S.; de F. Rezende, Flavia B.; Netto, Annibal D. Pereira; de C. Marques, Flávia F.
Doramectin (DOR), which belongs to the avermectin (AVM) group, has high antiparasitic activity and has therefore been widely used in food-producing animals. DOR shows low fluorescence quantum efficiency; as a consequence, chemical derivatization reactions are necessary to produce derivatives with improved luminescent properties before its determination by fluorimetry. As the presence of this compound in food represents a risk to human health, an easy, clean and low-cost derivatization reaction, alternative to those usually employed, was developed that enables its spectrofluorimetric determination in milk samples. Ethanolic solutions of DOR containing sodium hydroxide at a final concentration of 0.25 mol L-1, after 60 min of heating at 50 °C, produced fluorescent signals 1000 times higher than the original ethanolic solution. Under these optimized conditions, a linear response range extending from 50.00 to 1000 μg L-1 was obtained, with a coefficient of determination (R^2) of 0.9970. The average recovery of DOR was 92.5 ± 1.5% (n = 3) in fortified bovine milk samples submitted to liquid-liquid extraction at low temperature and a preconcentration process, indicating the usefulness and effectiveness of the proposed method. The proposed spectrofluorimetric method is an alternative to high-performance liquid chromatography (HPLC) based methods, allowing rapid and simple detection of doramectin in milk samples.
Development of real-time PCR methods to quantify patulin-producing molds in food products.
Rodríguez, Alicia; Luque, M Isabel; Andrade, María J; Rodríguez, Mar; Asensio, Miguel A; Córdoba, Juan J
2011-09-01
Patulin is a mycotoxin produced by different Penicillium and Aspergillus strains isolated from food products. To improve food safety, the presence of patulin-producing molds in foods should be quantified. In the present work, two real-time (RTi) PCR protocols, based on SYBR Green and TaqMan chemistries, were developed. Thirty-four patulin-producing and 28 non-producing strains belonging to different species usually reported in food products were used. Patulin production was tested by micellar electrokinetic capillary electrophoresis (MECE) and high-pressure liquid chromatography-mass spectrometry (HPLC-MS). A primer pair F-idhtrb/R-idhtrb and the probe IDHprobe were designed from the isoepoxydon dehydrogenase (idh) gene, which is involved in patulin biosynthesis. The functionality of the developed method was demonstrated by the highly linear relationship of the standard curves constructed from the idh gene copy number and Ct values for the different patulin producers tested. The developed SYBR Green and TaqMan assays successfully quantified patulin producers in artificially inoculated food samples, with a minimum threshold of 10 conidia g(-1) per reaction. The developed methods quantified fungal load in foods with high efficiency. These RTi-PCR protocols are proposed for quantifying patulin-producing molds in food products and preventing patulin from entering the food chain.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames' joint center estimates via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimate. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial, as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
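The variance-minimization idea can be illustrated with the classical functional principle: if a marker is rigidly attached to a segment rotating about a fixed joint center c, the marker-to-center distances are constant across frames, so c can be estimated by minimizing the variance of those distances. A toy Python sketch of that principle (not the paper's full SFO pipeline):

    import numpy as np
    from scipy.optimize import minimize

    def joint_center(markers):
        # markers: (T, 3) positions of one marker across T frames,
        # expressed in the proximal segment's coordinate frame
        def cost(c):
            d = np.linalg.norm(markers - c, axis=1)
            return np.var(d)                 # zero for a perfect fixed-center rotation
        return minimize(cost, x0=markers.mean(axis=0)).x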
Process for making ceramic insulation
Akash, Akash [Salt Lake City, UT; Balakrishnan, G Nair [Sandy, UT
2009-12-08
A method is provided for producing insulation materials and insulation for high temperature applications using novel castable and powder-based ceramics. The ceramic components produced using the proposed process offer (i) fine porosity (from nano- to micro-scale); (ii) a superior strength-to-weight ratio; and (iii) flexibility in designing multilayered features offering multifunctionality, which will increase the service lifetime of insulation and refractory components used in the solid oxide fuel cell, direct carbon fuel cell, furnace, metal melting, glass, chemical, paper/pulp, automobile, industrial heating, coal, and power generation industries. Further, the ceramic components made using this method may have net-shape and/or net-size advantages with minimal post-machining requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-28
... and rehabilitation research; (2) foster an exchange of expertise, information, and training methods to... refined analyses of data, producing observational findings, and creating other sources of research-based... Institute on Disability and Rehabilitation Research--Rehabilitation Research and Training Center AGENCY...
Getty: producing oil from diatomite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zublin, L.
1981-10-01
Getty Oil Company has developed unconventional oil production techniques which will yield oil from diatomaceous earth. They propose to mine oil-saturated diatomite using open-pit mining methods. Getty's diatomite deposit in the McKittrick field of California is unique because it is cocoa brown and saturated with crude oil. It is also classified as a tightly packed deposit, from which oil cannot be extracted by conventional oilfield methods.
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with a V-shaped groove etched into its free surface are collected by a soft recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that our proposed model provides a far more reasonable fit for the laser shock-loaded tin.
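The "linear combination of exponential distributions" can be written, for fragment size s, as

    p(s) = \sum_{i=1}^{m} w_i \lambda_i e^{-\lambda_i s}, \qquad \sum_i w_i = 1,

where each component corresponds to one Poisson point process in the mixture and the weights w_i and rates \lambda_i are fit to the measured fragment sizes; this is our reading of the model described above, with the exact parameterization left to the paper.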
Inverse Tone Mapping Based upon Retina Response
Huo, Yongqing; Yang, Fan; Brost, Vincent
2014-01-01
The development of high dynamic range (HDR) displays has spurred research on inverse tone mapping methods, which expand the dynamic range of a low dynamic range (LDR) image to match that of an HDR monitor. This paper proposes a novel physiological approach, which avoids the artifacts that occur in most existing algorithms. Inspired by properties of the human visual system (HVS), this dynamic range expansion scheme has low computational complexity and a limited number of parameters, and obtains high-quality HDR results. Comparisons with three recent algorithms in the literature also show that the proposed method reveals more important image details and produces less contrast loss and distortion. PMID:24744678
A High Sensitivity and Wide Dynamic Range Fiber-Optic Sensor for Low-Concentration VOC Gas Detection
Khan, Md. Rajibur Rahaman; Kang, Shin-Won
2014-01-01
In this paper, we propose a volatile organic compound (VOC) gas sensing system with high sensitivity and a wide dynamic range that is based on the principle of the heterodyne frequency modulation method. According to this method, the time period of the sensing signal shifts when the Nile Red-containing VOC-sensitive membrane of a fiber-optic sensing element comes into contact with a VOC. This sensing membrane produces strong, fast and reversible signals when exposed to VOC gases. The response and recovery times of the proposed sensing system were less than 35 s, and good reproducibility and accuracy were obtained. PMID:25490592
Combined stamping-forging for non-axisymmetric product
NASA Astrophysics Data System (ADS)
Taureza, Muhammad; Danno, Atsushi; Song, Xu; Oh, Jin An
2016-10-01
Successive combined stamping-forging (CSF) is proposed to produce multi-thickness non-axisymmetric components. This method involves successive compression to create exclusively outward metal flow. Hitherto, the development of CSF has mostly been carried out for axisymmetric geometries. Using this technique, a defect-free rectangular case component with a length-to-thickness ratio of 40 is produced with lower forging pressure. This technology has potential for high-throughput production of parts with multiple thicknesses and high width-to-thickness ratios.
Method for Producing Launch/Landing Pads and Structures Project
NASA Technical Reports Server (NTRS)
Mueller, Robert P. (Compiler)
2015-01-01
Current plans for deep space exploration include building landing/launch pads capable of withstanding the rocket blast of spacecraft much larger than those of the Apollo days. The proposed concept will develop lightweight launch and landing pad materials from in-situ materials, utilizing regolith to produce controllable porous cast metallic foam brick/tile shapes. These shapes can be utilized to lay a landing/launch platform, as a construction material, or as more complex parts of mechanical assemblies.
An efficient graph theory based method to identify every minimal reaction set in a metabolic network
2014-01-01
Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in an Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two, due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared with existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential, since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared with other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
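For orientation, the optimization that minimal-reaction-set identification solves can be stated as a mixed-integer program over the stoichiometric matrix S (this is the standard formulation such methods address, not a description of the authors' graph-theoretic algorithm itself):

    \min_{v,\, y} \sum_j y_j \quad \text{s.t.} \quad S v = 0, \quad v_{\mathrm{biomass}} \ge v_{\min}, \quad |v_j| \le M y_j, \quad y_j \in \{0, 1\},

where y_j indicates whether reaction j is retained and M bounds the flux magnitudes; enumerating all optima of this program yields all minimal reaction sets.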
Ring resonator based narrow-linewidth semiconductor lasers
NASA Technical Reports Server (NTRS)
Ksendzov, Alexander (Inventor)
2005-01-01
The present invention is a method and apparatus for using ring resonators to produce narrow-linewidth hybrid semiconductor lasers. According to one embodiment of the present invention, the narrow linewidths are produced by combining the semiconductor gain chip with a narrow pass-band external feedback element. The semiconductor laser is produced using a ring resonator which, combined with a Bragg grating, acts as the external feedback element. According to another embodiment of the present invention, the proposed integrated optics ring resonator is based on plasma enhanced chemical vapor deposition (PECVD) SiO2/SiON/SiO2 waveguide technology.
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Multiscale moment-based technique for object matching and recognition
NASA Astrophysics Data System (ADS)
Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang
2000-03-01
A new method is proposed to extract features from an object for matching and recognition. The proposed features are a combination of local and global characteristics -- local characteristics from the 1-D signature function that is defined at each pixel on the object boundary, and global characteristics from the moments that are generated from the signature function. The boundary of the object is first extracted, then the signature function is generated by computing the angle between two lines from every point on the boundary as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments. The moments of the signature function are global characters of a local feature set. Using moments as the eventual features instead of the signature function reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that generate more accurate matching. The multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. This method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
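A compact numpy sketch of the two stages, assuming the signature at boundary point i is the angle between the chords to the points k steps behind and ahead, followed by ordinary central moments of the resulting 1-D function; the paper's exact signature definition may differ in detail:

    import numpy as np

    def signature(boundary, k=5):
        # boundary: (N, 2) ordered boundary points; returns the angle at each point
        v1 = np.roll(boundary, k, axis=0) - boundary
        v2 = np.roll(boundary, -k, axis=0) - boundary
        cos_a = np.sum(v1 * v2, axis=1) / (
            np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
        return np.arccos(np.clip(cos_a, -1.0, 1.0))

    def central_moments(sig, orders=(2, 3, 4)):
        # Global descriptors of the local signature; varying k gives the multiscale set
        mu = sig.mean()
        return np.array([np.mean((sig - mu) ** p) for p in orders])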
A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image-based indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the publicly available RAWSEEDS dataset. The results show that the proposed method performs robustly and produces very small position and orientation errors.
NASA Astrophysics Data System (ADS)
Joo, Kyu-Ji; Shin, Jae-Woo; Dong, Kyung-Rae; Lim, Chang-Seon; Chung, Woon-Kwan; Kim, Young-Jae
2013-11-01
Reducing the exposure dose from a periapical X-ray machine is an important aim in dental radiography. Although the radiation exposure dose is generally low, any radiation exposure is harmful to the human body. Therefore, this study developed a method that reduces the exposure dose significantly compared to that encountered in a normal procedure, but still produces an image with a similar resolution. The correlation between the image resolution and the exposure dose of the proposed method was examined with increasing distance between the dosimeter and the X-ray tube. The results were compared with those obtained from the existing radiography method. When periapical radiography was performed once according to the recommendations of the International Commission on Radiological Protection (ICRP), the measured skin surface dose was low at 7 mGy or below. In contrast, the skin surface dose measured using the proposed method was only 1.57 mGy, showing a five-fold reduction. These results suggest that further decreases in dose might be achieved using the proposed method.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. The separation of phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model and comparison with the inversion results of the improved RFWI and conventional FWI methods verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique in computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance in a quantitative evaluation using a synthesized image than a conventional method based on the Hessian matrix, often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
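A rough scipy sketch of one analysis level, using a disk footprint for nodular (blob-like) responses and oriented line footprints for linear responses; the actual method is a full multiresolution analysis/synthesis filter bank, which this single-scale proxy does not reproduce:

    import numpy as np
    from scipy import ndimage

    def disk(r):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y) <= r * r

    def line(length, angle_deg):
        fp = np.zeros((length, length), bool)
        c, t = length // 2, np.deg2rad(angle_deg)
        for s in range(-c, c + 1):
            fp[int(round(c + s * np.sin(t))), int(round(c + s * np.cos(t)))] = True
        return fp

    def nodular_linear(img, r=4, length=15):
        tophat = ndimage.white_tophat(img, footprint=disk(r))   # bright structures < disk
        openings = [ndimage.grey_opening(img, footprint=line(length, a))
                    for a in range(0, 180, 15)]
        background = ndimage.grey_opening(img, footprint=disk(r))
        linear = np.clip(np.max(openings, axis=0) - background, 0, None)
        nodular = np.clip(tophat - linear, 0, None)
        return nodular, linear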
SymPS: BRDF Symmetry Guided Photometric Stereo for Shape and Light Source Estimation.
Lu, Feng; Chen, Xiaowu; Sato, Imari; Sato, Yoichi
2018-01-01
We propose uncalibrated photometric stereo methods that address the problem due to unknown isotropic reflectance. At the core of our methods is the notion of "constrained half-vector symmetry" for general isotropic BRDFs. We show that such symmetry can be observed in various real-world materials, and it leads to new techniques for shape and light source estimation. Based on the 1D and 2D representations of the symmetry, we propose two methods for surface normal estimation; one focuses on accurate elevation angle recovery for surface normals when the light sources only cover the visible hemisphere, and the other for comprehensive surface normal optimization in the case that the light sources are also non-uniformly distributed. The proposed robust light source estimation method also plays an essential role to let our methods work in an uncalibrated manner with good accuracy. Quantitative evaluations are conducted with both synthetic and real-world scenes, which produce the state-of-the-art accuracy for all of the non-Lambertian materials in MERL database and the real-world datasets.
OligoIS: Scalable Instance Selection for Class-Imbalanced Data Sets.
García-Pedrajas, Nicolás; Perez-Rodríguez, Javier; de Haro-García, Aida
2013-02-01
In current research, an enormous amount of information is constantly being produced, which poses a challenge for data mining algorithms. Many of the problems in extremely active research areas, such as bioinformatics, security and intrusion detection, or text mining, share the following two features: large data sets and class-imbalanced distribution of samples. Although many methods have been proposed for dealing with class-imbalanced data sets, most of these methods are not scalable to the very large data sets common to those research fields. In this paper, we propose a new approach to dealing with the class-imbalance problem that is scalable to data sets with many millions of instances and hundreds of features. This proposal is based on the divide-and-conquer principle combined with application of the selection process to balanced subsets of the whole data set. This divide-and-conquer principle allows the execution of the algorithm in linear time. Furthermore, the proposed method is easy to implement using a parallel environment and can work without loading the whole data set into memory. Using 40 class-imbalanced medium-sized data sets, we will demonstrate our method's ability to improve the results of state-of-the-art instance selection methods for class-imbalanced data sets. Using three very large data sets, we will show the scalability of our proposal to millions of instances and hundreds of features.
Analog self-powered harvester achieving switching pause control to increase harvested energy
NASA Astrophysics Data System (ADS)
Makihara, Kanjuro; Asahina, Kei
2017-05-01
In this paper, we propose a self-powered analog controller circuit to increase the efficiency of electrical energy harvesting from vibrational energy using piezoelectric materials. Although the existing synchronized switch harvesting on inductor (SSHI) method is designed to produce efficient harvesting, its switching operation generates a vibration-suppression effect that reduces the harvested levels of electrical energy. To solve this problem, the authors proposed—in a previous paper—a switching method that takes this vibration-suppression effect into account. This method temporarily pauses the switching operation, allowing the recovery of the mechanical displacement and, therefore, of the piezoelectric voltage. In this paper, we propose a self-powered analog circuit to implement this switching control method. Self-powered vibration harvesting is achieved in this study by attaching a newly designed circuit to an existing analog controller for SSHI. This circuit aims to effectively implement the aforementioned new switching control strategy, where switching is paused in some vibration peaks, in order to allow motion recovery and a consequent increase in the harvested energy. Harvesting experiments performed using the proposed circuit reveal that the proposed method can increase the energy stored in the storage capacitor by a factor of 8.5 relative to the conventional SSHI circuit. This proposed technique is useful to increase the harvested energy especially for piezoelectric systems having large coupling factor.
Intelligent methods for the process parameter determination of plastic injection molding
NASA Astrophysics Data System (ADS)
Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn
2018-03-01
Injection molding is one of the most widely used material processing methods for producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews recent studies and developments of the intelligent methods applied to the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework for process parameter determination is proposed after comprehensive discussion. Finally, conclusions and future research topics are discussed.
Xia, Huijun; Yang, Kunde; Ma, Yuanliang; Wang, Yong; Liu, Yaxiong
2017-01-01
Generally, many beamforming methods are derived under the assumption of white noise. In practice, the actual underwater ambient noise is complex, and the noise removal capacity of a beamforming method may deteriorate considerably. Furthermore, in underwater environments with extremely low signal-to-noise ratio (SNR), the performance of beamforming methods may degrade. To tackle these problems, a noise removal method for a uniform circular array (UCA) is proposed to remove the received noise and improve the SNR in complex noise environments with low SNR. First, symmetrical noise sources are defined and the spatial correlation of the symmetrical noise sources is calculated. Then, based on these results, the noise covariance matrix is decomposed into symmetrical and asymmetrical components. Analysis indicates that the symmetrical component only affects the real part of the noise covariance matrix. Consequently, delay-and-sum (DAS) beamforming is performed using the imaginary part of the covariance matrix to remove the symmetrical component. However, the noise removal method causes two problems. First, the proposed method produces a false target. Second, the proposed method would seriously suppress the signal when it is located in certain directions. To solve the first problem, two methods to reconstruct the signal covariance matrix are presented: one based on the estimation of the signal variance and one based on a constrained optimization algorithm. To solve the second problem, we can design the array configuration and select a suitable working frequency. Theoretical analysis and experimental results demonstrate that the proposed methods are particularly effective in complex noise environments with low SNR. The proposed method can be extended to any array. PMID:28598386
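A numpy sketch of the core step, assuming a UCA of M sensors with radius r and narrowband snapshots X of shape (M, T): the symmetric-noise contribution is discarded by scanning with the imaginary part of the sample covariance. The steering model and the use of |a^H Im(R) a| as the scan statistic are our simplification of the method:

    import numpy as np

    def uca_steering(theta, M=16, r=0.5, wavelength=1.0):
        phi = 2 * np.pi * np.arange(M) / M           # sensor azimuths on the circle
        k = 2 * np.pi / wavelength
        return np.exp(1j * k * r * np.cos(theta - phi))

    def das_imag(X, thetas, **kw):
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        R_i = np.imag(R)                             # symmetric noise lives in Re(R)
        out = []
        for t in thetas:
            a = uca_steering(t, **kw)
            out.append(np.abs(a.conj() @ R_i @ a))
        return np.array(out)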
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
... analyses of data, producing observational findings, and creating other sources of research-based... provide a rationale for the stage of research being proposed and the research methods associated with the... DEPARTMENT OF EDUCATION Final Priority: Disability and Rehabilitation Research Projects and...
A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems
NASA Astrophysics Data System (ADS)
Chan, Tony; Szeto, Tedd
1994-03-01
We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which is itself a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, the source of one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as to produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
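For context, a sketch of the plain (Sonneveld) CGS iteration that CSCGS squares and stabilizes; the composite-step skipping logic of Bank and Chan is not reproduced here, only the transpose-free baseline whose rho-breakdown the abstract mentions. The test matrix is made up.

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, maxiter=200):
    """Plain conjugate gradients squared; transpose-free, like CSCGS."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_tilde = r.copy()                  # fixed shadow residual
    rho_old, p, q = 1.0, np.zeros(n), np.zeros(n)
    for k in range(maxiter):
        rho = r_tilde @ r
        if rho == 0.0:                  # the breakdown CSCGS is built to avoid
            raise RuntimeError("CGS breakdown: rho == 0")
        beta = rho / rho_old if k > 0 else 0.0
        u = r + beta * q
        p = u + beta * (q + beta * p)
        v = A @ p
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        x += alpha * (u + q)
        r -= alpha * (A @ (u + q))
        rho_old = rho
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxiter

rng = np.random.default_rng(1)
A = 2 * np.eye(100) + 0.1 * rng.standard_normal((100, 100))   # nonsymmetric test
b = rng.standard_normal(100)
x, iters = cgs(A, b)
print(iters, "iterations, residual", np.linalg.norm(b - A @ x))
```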
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Brain medical image diagnosis based on corners with importance-values.
Gao, Linlin; Pan, Haiwei; Li, Qing; Xie, Xiaoqin; Zhang, Zhiqiang; Han, Jinming; Zhai, Xiao
2017-11-21
Brain disorders are one of the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching studies essential. However, existing corner detection studies do not consider the domain information of the brain. This leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of the brain are not employed in existing methods. Moreover, most corner matching studies are used for 3D image registration; they are inapplicable to 2D brain image diagnosis because of the different mechanisms. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs) which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis. Brain CT and MRI image sets are utilized to evaluate the proposed method. First, classifiers are trained in N-fold cross-validation analysis to produce the best θ and K. Then independent brain image sets are tested to evaluate the classifiers. Moreover, the classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% on accuracy and 2.4% on F1-score. Regarding the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% on accuracy and 4.9% on F1-score. Results also demonstrate that the proposed method is robust to different intensity ranges of brain medical images. In this study, we develop a robust corner-based brain medical image classifier. Specifically, we propose a corner detection method utilizing the diagnostic information from neurologists and a corner matching method based on the uncertainty and structure of brain medical images. Additionally, we present a similarity calculation method for brain image classification. Experimental results on two brain image sets show the proposed corner-based brain medical image classifier outperforms the state-of-the-art studies.
The average receiver operating characteristic curve in multireader multicase imaging studies
Samuelson, F W
2014-01-01
Objective: In multireader, multicase (MRMC) receiver operating characteristic (ROC) studies for evaluating medical imaging systems, the area under the ROC curve (AUC) is often used as a summary metric. Owing to the limitations of AUC, plotting the average ROC curve to accompany the rigorous statistical inference on AUC is recommended. The objective of this article is to investigate methods for generating the average ROC curve from the ROC curves of individual readers. Methods: We present both a non-parametric method and a parametric method for averaging ROC curves that produce an ROC curve the area under which is equal to the average AUC of the individual readers (a property we call area preserving). We use hypothetical examples, simulated data and a real-world imaging data set to illustrate these methods and their properties. Results: We show that our proposed methods are area preserving. We also show that the method of averaging the ROC parameters, either the conventional bi-normal parameters (a, b) or the proper bi-normal parameters (c, d_a), is generally not area preserving and may produce an ROC curve that is intuitively not an average of multiple curves. Conclusion: Our proposed methods are useful for making plots of average ROC curves in MRMC studies as a companion to the rigorous statistical inference on the AUC end point. The software implementing these methods is freely available from the authors. Advances in knowledge: Methods for generating the average ROC curve in MRMC ROC studies are formally investigated. The area-preserving criterion we defined is useful for evaluating such methods. PMID:24884728
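The area-preserving idea can be illustrated with simple vertical averaging: interpolate each reader's TPR onto a common FPR grid and average pointwise. By linearity of the integral, the AUC of the averaged curve equals the mean of the individual AUCs. This toy sketch is not the authors' software; the reader curves are made up.

```python
import numpy as np

def average_roc(fprs_tprs, grid=np.linspace(0.0, 1.0, 201)):
    """Pointwise (vertical) average of reader ROC curves on a common FPR grid.

    Averaging TPR at fixed FPR is area preserving: by linearity of the
    integral, AUC(mean curve) == mean(individual AUCs).
    """
    tprs = [np.interp(grid, fpr, tpr) for fpr, tpr in fprs_tprs]
    return grid, np.mean(tprs, axis=0)

# Two hypothetical readers' empirical ROC curves.
r1 = (np.array([0, .1, .3, 1.0]), np.array([0, .6, .85, 1.0]))
r2 = (np.array([0, .2, .5, 1.0]), np.array([0, .5, .9, 1.0]))
grid, mean_tpr = average_roc([r1, r2])
auc_mean_curve = np.trapz(mean_tpr, grid)
mean_auc = np.mean([np.trapz(t, f) for f, t in (r1, r2)])
print("AUC of averaged curve: %.4f  mean of AUCs: %.4f"
      % (auc_mean_curve, mean_auc))
```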
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
NASA Astrophysics Data System (ADS)
Varma, Ruchi; Ghosh, Jayanta
2018-06-01
A new hybrid technique, which is a combination of a neural network (NN) and a support vector machine, is proposed for the design of different slotted dual-band proximity-coupled microstrip antennas. Slots on the patch are employed to produce the second resonance along with size reduction. The proposed hybrid model provides the flexibility to design dual-band antennas in the frequency range from 1 to 6 GHz. This includes the DCS (1.71-1.88 GHz), PCS (1.88-1.99 GHz), UMTS (1.92-2.17 GHz), LTE2300 (2.3-2.4 GHz), Bluetooth (2.4-2.485 GHz), WiMAX (3.3-3.7 GHz), and WLAN (5.15-5.35 GHz, 5.725-5.825 GHz) band applications. A comparative study of the proposed technique against existing methods, such as the knowledge-based NN and the support vector machine, is also presented. The proposed method is found to be more accurate in terms of % error and root mean square % error, and the results are in good accord with the measured values.
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) provide various properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces informative features that describe the RS image scenes well. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
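A sketch of kernel-level feature fusion using sklearn's precomputed-kernel interface (a real API). The two hypothetical feature blocks and the fixed weights are assumptions for illustration; in the paper the per-kernel weights would be learned jointly by MKL rather than set by hand.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-ins for two heterogeneous feature types per image.
X_texture = rng.standard_normal((60, 32))
X_spectral = rng.standard_normal((60, 16))
y = rng.integers(0, 3, 60)                      # three toy scene classes

def combined_kernel(A_list, B_list, weights):
    """Weighted sum of per-feature-type RBF kernels (fixed weights here;
    MKL would learn them jointly with the classifier)."""
    return sum(w * rbf_kernel(A, B) for w, A, B in zip(weights, A_list, B_list))

w = [0.6, 0.4]                                  # illustrative fixed weights
tr, te = np.arange(45), np.arange(45, 60)
K_tr = combined_kernel([X_texture[tr], X_spectral[tr]],
                       [X_texture[tr], X_spectral[tr]], w)
K_te = combined_kernel([X_texture[te], X_spectral[te]],
                       [X_texture[tr], X_spectral[tr]], w)
clf = SVC(kernel="precomputed").fit(K_tr, y[tr])
print("toy accuracy:", (clf.predict(K_te) == y[te]).mean())
```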
S-CNN: Subcategory-aware convolutional networks for object detection.
Chen, Tao; Lu, Shijian; Fan, Jiayuan
2017-09-26
The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection.
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. The method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a way similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing the simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods for solving a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that many of the matrix-vector multiplications can be performed locally on each processor, and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation performed on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) To develop a new scalable graph partitioning method with good load balancing and communication reduction capability.
(2) To study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks with which each block needs to communicate. The contributions of this research are as follows: (1) We have developed a novel method that scales to a very large problem size while producing high-quality mesh partitions; (2) We measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones using our method. To the best of our knowledge, this is the largest complex mesh to which a partitioning method has been successfully applied; and (3) We have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods. PMID:25506389
An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value
NASA Astrophysics Data System (ADS)
Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu
2018-03-01
Blind image restoration algorithms usually produce ringing that is most obvious at edges. The ringing phenomenon is mainly affected by noise, by the type of restoration algorithm, and by errors in blur kernel estimation during restoration. Based on the physical mechanism of ringing, a method for evaluating the ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information characterizing the edge details is used to determine the weights and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in restored images under different restoration algorithms and different restoration parameters. The evaluation results are consistent with visual evaluation.
SEIPS-based process modeling in primary care.
Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T
2017-04-01
Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy
2016-11-01
The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
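A hedged sketch of the heat-separation idea in standard battery thermal-modeling notation (not necessarily the authors' exact formulation): the measured heat is the sum of an irreversible Joule term, which is even in the current, and a reversible entropic term, which is odd in the current, so half the difference of the charge and discharge heats at the same current magnitude isolates the entropic contribution.

```latex
% Heat generation at current I, open-circuit voltage U_ocv, temperature T:
\dot{Q} = \underbrace{I^2 R_{\mathrm{int}}}_{\text{irreversible}}
        + \underbrace{I\,T\,\frac{\partial U_{\mathrm{ocv}}}{\partial T}}_{\text{reversible (entropic)}}
% The irreversible term is even in I, the entropic term odd, hence
\dot{Q}_{\mathrm{rev}} \approx
  \tfrac{1}{2}\bigl(\dot{Q}_{\mathrm{charge}} - \dot{Q}_{\mathrm{discharge}}\bigr),
\qquad
\Delta S = nF\,\frac{\partial U_{\mathrm{ocv}}}{\partial T}
         = \frac{nF\,\dot{Q}_{\mathrm{rev}}}{I\,T}
\quad\text{(up to sign conventions).}
```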
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
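A sketch of the behavior-features idea using sklearn's LatentDirichletAllocation (a real API) on a hypothetical node-by-node interaction count matrix: the fitted topic mixtures serve as latent social dimensions feeding a one-vs-rest multi-label classifier. The corpus construction and labels are made up; the paper's own generative model may differ in detail.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_nodes, n_neighbors, n_dims = 200, 150, 8
# Hypothetical "behavior corpus": counts of each node's interactions with
# other nodes (rows play the role of documents, columns of words).
X_counts = rng.poisson(0.2, size=(n_nodes, n_neighbors))

# Topic mixtures = latent social dimensions describing connection behavior.
lda = LatentDirichletAllocation(n_components=n_dims, random_state=0)
Z = lda.fit_transform(X_counts)                    # shape (n_nodes, n_dims)

# Multi-label classification on the extracted dimensions (toy labels).
Y = rng.integers(0, 2, size=(n_nodes, 3))          # 3 binary labels per node
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(Z[:150], Y[:150])
print("toy per-label accuracy:", (clf.predict(Z[150:]) == Y[150:]).mean())
```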
Relaxation method of compensation in an optical correlator
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Daiuto, Brian J.
1987-01-01
An iterative method is proposed for the sharpening of programmable filters in a 4-f optical correlator. Continuously variable spatial light modulators (SLMs) permit the fine adjustment of optical processing filters so as to compensate for the departures from ideal behavior of a real optical system. Although motivated by the development of continuously variable phase-only SLMs, the proposed sharpening method is also applicable to amplitude modulators and, with appropriate adjustments, to binary modulators as well. A computer simulation is presented that illustrates the potential effectiveness of the method: an image is placed on the input to the correlator, and its corresponding phase-only filter is adjusted (allowed to relax) so as to produce a progressively brighter and more centralized peak in the correlation plane. The technique is highly robust against the form of the system's departure from ideal behavior.
Real-time traffic sign recognition based on a general purpose GPU and deep-learning.
Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea).
da Silva, Larissa F; Barbosa, Andreia D; de Paula, Heber M; Romualdo, Lincoln L; Andrade, Leonardo S
2016-09-15
This paper describes and discusses an investigation into the treatment of paint manufacturing wastewater (water-based acrylic texture) by coagulation (aluminum sulfate) coupled to electrochemical methods (BDD electrode). Two proposals are put forward, based on the results. The first proposal considers the feasibility of reusing wastewater treated by the methods separately and in combination, while the second examines the possibility of its disposal into water bodies. To this end, parameters such as toxicity, turbidity, color, organic load, dissolved aluminum, alkalinity, hardness and odor are evaluated. In addition, the proposal for water reuse is strengthened by the quality of the water-based paints produced using the wastewater treated by the two methods (combined and separate), which was evaluated based on the typical parameters for the quality control of these products. Under optimized conditions, the use of the chemical coagulation (12 mL/L of Al2(SO4)3 dosage) treatment, alone, proved the feasibility of reusing the treated wastewater in the paint manufacturing process. However, the use of the electrochemical method (i = 10 mA/cm2 and t = 90 min) was required to render the treated wastewater suitable for discharge into water bodies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz is given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method for conductivity estimation. Of all the SSFP variants considered here, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method for estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
Costanzo, Paola; Bonacci, Sonia; Cariati, Luca; Nardi, Monica; Oliverio, Manuela; Procopio, Antonio
2018-04-15
A simple and environmentally friendly microwave-assisted method to produce oleacein in good yield, starting from the easily available oleuropein, is presented here. The methodology is proposed as a way to produce appropriate amounts of hydroxytyrosol derivatives for enriching a commercial oil, giving an oil that provides beneficial effects on human health. Copyright © 2017 Elsevier Ltd. All rights reserved.
Classification of crude oils produced by in situ combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryabov, V.D.; Dauda, S.; Tabasaranskaya, T.Z.
1995-05-01
It has been shown previously that oil from the Karazhanbas field undergoes thermal and thermooxidative conversions in the course of production by in situ combustion (ISC). It has been proposed that crudes produced by this method should be assigned to three classes: those that have been subjected to thermal action only, those that have been subjected to thermooxidative action, and unconverted (native) crudes. This sort of classification is necessary in resolving questions of the rational mixing of crudes, their transportation, storage, and subsequent processing.
Casting technology for ODS steels - dispersion of nanoparticles in liquid metals
NASA Astrophysics Data System (ADS)
Sarma, M.; Grants, I.; Kaldre, I.; Bojarevics, A.; Gerbeth, G.
2017-07-01
Dispersion of particles to produce metal matrix nanocomposites (MMNC) can be achieved by means of ultrasonic vibration of the melt using ultrasound transducers. However, a direct transfer of this method to produce steel composites is not feasible because of the much higher working temperature. Therefore, an inductive technology for contactless treatment by acoustic cavitation was developed. This report describes the samples produced to assess the feasibility of the proposed method for nano-particle separation in steel. Stainless steel samples with inclusions of TiB2, TiO2, Y2O3, CeO2, Al2O3 and TiN have been created and analyzed. Additional experiments have been performed using light metals with an increased value of the steady magnetic field using a superconducting magnet with a field strength of up to 5 T.
Urban food crop production capacity and competition with the urban forest
Jeffrey J Richardson; L. Monika Moskal
2016-01-01
The sourcing of food plays a significant role in assessing the sustainability of a city, but it is unclear how much food a city can produce within its city limits. In this study, we propose a method for estimating the maximum food crop production capacity of a city and demonstrate the method in Seattle, WA USA by taking into account land use, the light environment, and...
High speed inviscid compressible flow by the finite element method
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Loehner, R.; Morgan, K.
1984-01-01
The finite element method and an explicit time-stepping algorithm based on Taylor-Galerkin schemes with an appropriate artificial viscosity are combined with an automatic mesh refinement process designed to produce accurate steady-state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included, demonstrating the excellent performance characteristics of the proposed procedures.
Optical phase plates as a creative medium for special effects in images
NASA Astrophysics Data System (ADS)
Shaoulov, Vesselin I.; Meyer, Catherine; Argotti, Yann; Rolland, Jannick P.
2001-12-01
A new paradigm and methods for special effects in images were recently proposed by artist and movie producer Steven Hylen. Based on these methods, images resembling painting may be formed using optical phase plates. The role of the mathematical and optical properties of the phase plates is studied in the development of these new art forms. Results of custom software as well as ASAP simulations are presented.
NASA Astrophysics Data System (ADS)
Saleem, M. Rehan; Ali, Ishtiaq; Qamar, Shamsul
2018-03-01
In this article, a reduced five-equation two-phase flow model is numerically investigated. The formulation of the model is based on conservation and energy exchange laws. The model is non-conservative, and the governing equations contain two equations for mass conservation, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side for incorporating the energy exchange between the two fluids in the form of mechanical and thermodynamical work. A Runge-Kutta discontinuous Galerkin finite element method is applied to solve the model equations. The main attractive features of the proposed method include its formal higher-order accuracy, its nonlinear stability, its ability to handle complicated geometries, and its ability to capture sharp discontinuities or strong gradients in the solutions without producing spurious oscillations. The proposed method is robust and well suited for large-scale time-dependent computational problems. Several case studies of two-phase flows are presented. For validation and comparison of the results, the same model equations are also solved using a staggered central scheme. It was found that the discontinuous Galerkin scheme produces better results than the staggered central scheme.
Automatic localization of cochlear implant electrodes in CTs with a limited intensity range
NASA Astrophysics Data System (ADS)
Zhao, Yiyuan; Dawant, Benoit M.; Noble, Jack H.
2017-02-01
Cochlear implants (CIs) are neural prosthetics for treating severe-to-profound hearing loss. Our group has developed an image-guided cochlear implant programming (IGCIP) system that uses image analysis techniques to recommend patient-specific CI processor settings to improve hearing outcomes. One crucial step in IGCIP is the localization of CI electrodes in post-implantation CTs. Manual localization of electrodes requires time and expertise. To automate this process, our group has proposed automatic techniques that have been validated on CTs acquired with scanners that produce images with an extended range of intensity values. However, many clinical CTs are acquired with a limited intensity range, which complicates the electrode localization process. In this work, we present a pre-processing step for CTs with a limited intensity range and extend the methods we proposed for full-intensity-range CTs to localize CI electrodes in CTs with a limited intensity range. We evaluate our method on CTs of 20 subjects implanted with CI arrays produced by different manufacturers. Our method achieves a mean localization error of 0.21 mm. This indicates that our method is robust for automatic localization of CI electrodes in different types of CTs, which represents a crucial step for translating IGCIP from the research laboratory to clinical use.
Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.
Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker
2018-06-01
To create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined, and example environments are described and analysed. With the proposed software, a tool for the simulation of VAEs is available.
Milling strategies evaluation when simulating the forming dies' functional surfaces production
NASA Astrophysics Data System (ADS)
Ižol, Peter; Tomáš, Miroslav; Beňo, Jozef
2016-05-01
The paper deals with the selection and evaluation of milling strategies available in CAM systems and applicable when parts with complicated shapes, such as forming dies, are produced. A method for obtaining samples is proposed, which stems from a real forming-die surface machined by the relevant strategies. The applicability of each strategy to the whole part, the forming die, is assessed by evaluating the particular specimens. The presented methodology has been verified by machining a model die and comparing it to the production procedure proposed in other CAM systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... U.S. Honey Producer Research, Promotion, and Consumer Information Order; Withdrawal of a Proposed..., 2010, that proposed a new U.S. honey producer funded research and promotion program under the Commodity Promotion, Research, and Information Act of 1996 (1996 Act). The proposed U.S. Honey Producer Research...
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new efficient near-duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near-duplicate image detection. The extracted features are used to construct an LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near-duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
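A random-hyperplane LSH index sketch showing how hash collisions surface near-duplicate candidates. The learned siamese hash codes are replaced here by raw feature vectors, and the paper's load-balancing scheme is only gestured at by reporting bucket occupancy; names and sizes are made up.

```python
import numpy as np
from collections import defaultdict

class SimpleLSHIndex:
    """Random-hyperplane LSH over feature vectors (a stand-in for the
    learned hash codes); near-duplicates collide in the same bucket."""
    def __init__(self, dim, n_bits=16, seed=0):
        self.planes = np.random.default_rng(seed).standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, idx, v):
        self.buckets[self._key(v)].append(idx)

    def query(self, v):
        return self.buckets.get(self._key(v), [])

rng = np.random.default_rng(1)
feats = rng.standard_normal((1000, 64))
index = SimpleLSHIndex(dim=64)
for i, f in enumerate(feats):
    index.add(i, f)

probe = feats[42] + 0.01 * rng.standard_normal(64)   # a near-duplicate of item 42
print("candidates for near-duplicate of 42:", index.query(probe))
sizes = [len(b) for b in index.buckets.values()]
print("bucket load spread: min %d, max %d" % (min(sizes), max(sizes)))
```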
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
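A minimal central-difference sketch of optimum sensitivity estimation, re-solving a toy parametric program with scipy.optimize.minimize (a real API) at p ± h. The RQP-specific reuse of the Hessian approximation described in the abstract is not reproduced; the toy objective and step size are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve(p):
    """Optimum of a toy parametric problem: min (x0 - p)^2 + (x1 - 2p)^2."""
    res = minimize(lambda x: (x[0] - p) ** 2 + (x[1] - 2 * p) ** 2,
                   x0=np.zeros(2), method="BFGS")
    return res.x

p, h = 1.5, 1e-4
dx_dp = (solve(p + h) - solve(p - h)) / (2 * h)   # central differencing
print("estimated dx*/dp:", dx_dp, "(analytic answer: [1, 2])")
```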
Lan, Chengming; Zhou, Wensong; Xie, Yawen
2018-04-16
This work proposes a 3D shaped optic fiber sensor for ultrasonic stress waves detection based on the principle of a Mach–Zehnder interferometer. This sensor can be used to receive acoustic emission signals in the passive damage detection methods and other types of ultrasonic signals propagating in the active damage detection methods, such as guided wave-based methods. The sensitivity of an ultrasonic fiber sensor based on the Mach–Zehnder interferometer mainly depends on the length of the sensing optical fiber; therefore, the proposed sensor achieves the maximum possible sensitivity by wrapping an optical fiber on a hollow cylinder with a base. The deformation of the optical fiber is produced by the displacement field of guided waves in the hollow cylinder. The sensor was first analyzed using the finite element method, which demonstrated its basic sensing capacity, and the simulation signals have the same characteristics in the frequency domain as the excitation signal. Subsequently, the primary investigations were conducted via a series of experiments. The sensor was used to detect guided wave signals excited by a piezoelectric wafer in an aluminum plate, and subsequently it was tested on a reinforced concrete beam, which produced acoustic emission signals via impact loading and crack extension when it was loaded to failure. The signals obtained from a piezoelectric acoustic emission sensor were used for comparison, and the results indicated that the proposed 3D fiber optic sensor can detect ultrasonic signals in the specific frequency response range.
Martínez, Sergio; Sánchez, David; Valls, Aida
2013-04-01
Structured patient data like Electronic Health Records (EHRs) are a valuable source for clinical research. However, the sensitive nature of such information requires some anonymisation procedure to be applied before releasing the data to third parties. Several studies have shown that the removal of identifying attributes, like the Social Security Number, is not enough to obtain an anonymous data file, since unique combinations of other attributes, such as rare diagnoses and personalised treatments, may lead to disclosure of the patient's identity. To tackle this problem, Statistical Disclosure Control (SDC) methods have been proposed to mask sensitive attributes while preserving, up to a certain degree, the utility of the anonymised data. Most of these methods focus on continuous-scale numerical data. Considering that part of the clinical data found in EHRs is expressed with non-numerical attributes, such as diagnoses, symptoms and procedures, their application to EHRs produces far from optimal results. In this paper, we propose a general framework to enable the accurate application of SDC methods to non-numerical clinical data, with a focus on the preservation of semantics. To do so, we exploit structured medical knowledge bases like SNOMED CT to propose semantically-grounded operators to compare, aggregate and sort non-numerical terms. Our framework has been applied to several well-known SDC methods and evaluated using a real clinical dataset with non-numerical attributes. Results show that the exploitation of medical semantics produces anonymised datasets that better preserve the utility of EHRs. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
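A numpy sketch of the processing chain named above (background subtraction, normalization, calibration). The linear intensity-to-saturation mapping and the reference intensities are assumptions for illustration only, since the paper builds its calibration curve during the experiment itself rather than from fixed constants.

```python
import numpy as np

def water_content_map(img, background, i_dry, i_wet, theta_r=0.05, theta_s=0.40):
    """Convert reflected-light intensities to a 2D water content map.

    Steps mirror the ones listed in the abstract: background subtraction,
    normalization between dry and saturated reference intensities, then
    scaling to volumetric water content. The linear mapping is an
    illustrative assumption, not the paper's fitted calibration curve.
    """
    corrected = img.astype(float) - background
    s = np.clip((corrected - i_dry) / (i_wet - i_dry), 0.0, 1.0)  # 0=dry, 1=saturated
    return theta_r + s * (theta_s - theta_r)

rng = np.random.default_rng(0)
frame = rng.uniform(40, 200, size=(140, 400))        # fake grayscale photo
bg = np.full_like(frame, 20.0)                       # fake background frame
theta = water_content_map(frame, bg, i_dry=30.0, i_wet=170.0)
print("water content range: %.3f to %.3f" % (theta.min(), theta.max()))
```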
NASA Astrophysics Data System (ADS)
Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming
2015-11-01
Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals, the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay-tap samples. However, when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced, because AAM has a significant impact on the ACF2 periodicity, which makes the symbol period harder or even impossible to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used in place of ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed to accommodate both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay-tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to the desired values.
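A baseband toy illustrating symbol-rate recovery from ACF periodicity. The "first-order ACF" here is the plain sample autocorrelation of the intensity waveform, a simplification of the paper's delay-tap construction, and an RZ pulse shape is assumed so the waveform carries a clock component; all rates are made up.

```python
import numpy as np
from scipy.signal import find_peaks

fs, rs, n_sym = 80e9, 10e9, 4000        # sampling rate, true symbol rate
sps = int(fs / rs)                      # samples per symbol (hidden from estimator)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, n_sym).astype(float)
pulse = np.r_[np.ones(sps // 2), np.zeros(sps - sps // 2)]  # RZ shaping -> clock tone
x = np.kron(bits, pulse) + 0.05 * rng.standard_normal(n_sym * sps)

# Autocorrelation of the mean-removed amplitude samples; its periodicity
# reveals the symbol period.
xc = x - x.mean()
acf = np.correlate(xc, xc, mode="full")[xc.size - 1:]
acf /= acf[0]

# First strong ACF peak beyond lag zero sits at one symbol period.
peaks, _ = find_peaks(acf[:200], height=0.25)
lag = peaks[0]
print("estimated symbol rate: %.1f GBd (true %.1f)" % (fs / lag / 1e9, rs / 1e9))
```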
Effects of Synthesis Method on Electrical Properties of Graphene
NASA Astrophysics Data System (ADS)
Fuad, M. F. I. Ahmad; Jarni, H. H.; Shariffudin, W. N.; Othman, N. H.; Rahim, A. N. Che Abdul
2018-05-01
The aim of this study is to achieve the highest reduction capability and the complete removal of oxygen from graphene oxide (GO) by using different chemical methods. A modified Hummers' method was used to produce GO, and hydrazine hydrate was utilized in the reduction of GO to graphene. Two chemical methods were used to synthesize graphene: 1) Sina's method and 2) Sasha's method. Both GO and graphene were then characterized using X-Ray Powder Diffraction (XRD) and Fourier Transform Infrared Spectrometry (FT-IR). The patterns obtained from XRD showed that the values for graphene and GO are within their reliable ranges, while FT-IR identified the differing functional groups of GO and graphene. Graphene was verified to have undergone reduction, since no oxygen-containing functional groups were detected. Electrochemical impedance spectroscopy (EIS) was then conducted to test the electrical conductivity of two batches (each weighing 1.6 g) of graphene synthesized using the different methods. Graphene produced by Sasha's method was found to have a lower conductivity than that produced by Sina's method, with values of 6.2E+02 S/m and 8.1E+02 S/m, respectively. These values show that both methods produced good graphene; however, the graphene produced using Sina's method has better electrical properties.
Regional flood-frequency relations for streams with many years of no flow
Hjalmarson, Hjalmar W.; Thomas, Blakemore E.
1990-01-01
In the southwestern United States, flood-frequency relations for streams that drain small arid basins are difficult to estimate, largely because of the extreme temporal and spatial variability of floods and the many years of no flow. A method is proposed that is based on the station-year method. The new method produces regional flood-frequency relations using all available annual peak-discharge data. The prediction errors for the relations are directly assessed using randomly selected subsamples of the annual peak discharges.
Convergence analysis of a monotonic penalty method for American option pricing
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Xiaoqi; Teo, Kok Lay
2008-12-01
This paper is devoted to the convergence analysis of a monotonic penalty method for pricing American options. A monotonic penalty method is first proposed to solve the complementarity problem arising from the valuation of American options, which produces a nonlinear degenerate parabolic PDE with the Black-Scholes operator. Based on variational theory, the solvability and convergence properties of this penalty approach are established in an appropriate infinite-dimensional space. Moreover, the convergence rate of the combination of two power penalty functions is obtained.
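A hedged sketch in the notation commonly used for power penalty methods for American options, not necessarily the authors' exact operator or sign convention: the linear complementarity problem for the option value V with payoff ψ is replaced by a penalized PDE whose penalty term activates where the constraint V ≥ ψ is violated.

```latex
% American option LCP (L_BS = Black-Scholes operator, \psi = payoff):
\frac{\partial V}{\partial t} + \mathcal{L}_{BS} V \le 0, \quad
V \ge \psi, \quad
\left(\frac{\partial V}{\partial t} + \mathcal{L}_{BS} V\right)(V - \psi) = 0.
% Power-penalty approximation, penalty parameter \lambda \to \infty, k \ge 1:
\frac{\partial V_\lambda}{\partial t} + \mathcal{L}_{BS} V_\lambda
  + \lambda\,[\psi - V_\lambda]_+^{1/k} = 0,
\qquad [u]_+ = \max(u, 0),
% with the rate reported in the power-penalty literature:
\|V_\lambda - V\| = O\!\left(\lambda^{-k/2}\right).
```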
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Berry, M. L.; Grieme, M.
We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are multi-fold: i) basing source location estimates on four detectors improves their accuracy, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to two real roots (if any) in triangulation, and obviates the need to identify real phantom roots during clustering.
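A sketch of the ratio-of-squared-distance geometry under an inverse-square count model with negligible background, an assumption made here for illustration rather than the paper's full statistical treatment: each pair of detectors constrains the source to a sphere, and four detectors yield a closed-form location.

```latex
% Inverse-square model: detector i at x_i measures rate c_i = S / \|x - x_i\|^2.
% For any pair (i, j), the rate ratio fixes the ratio of squared distances:
\frac{c_i}{c_j} = \frac{\|x - x_j\|^2}{\|x - x_i\|^2}
\;\Longrightarrow\;
c_i\,\|x - x_i\|^2 - c_j\,\|x - x_j\|^2 = 0,
% i.e. (c_i - c_j)\|x\|^2 - 2\,x^{\top}(c_i x_i - c_j x_j)
%      + (c_i\|x_i\|^2 - c_j\|x_j\|^2) = 0,
% a sphere in x; differencing pairs of such equations eliminates \|x\|^2,
% and with four detectors the resulting linear system gives a unique
% closed-form estimate of the source position x.
```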
New sulphiding method for steel and cast iron parts
NASA Astrophysics Data System (ADS)
Tarelnyk, V.; Martsynkovskyy, V.; Gaponova, O.; Konoplianchenko, Ie; Dovzyk, M.; Tarelnyk, N.; Gorovoy, S.
2017-08-01
A new method is proposed for sulphiding steel and cast iron part surfaces by electroerosion alloying (EEA) with a special electrode. The method is characterized in that, while manufacturing the electrode, at least one recess is formed on its surface in any known manner (punching, threading, pulling, etc.) and is filled with sulfur as a paste-like material; EEA is then performed with the obtained electrode without waiting for the paste-like material to dry.
Lessons Learned from Client Projects in an Undergraduate Project Management Course
ERIC Educational Resources Information Center
Pollard, Carol E.
2012-01-01
This work proposes that a subtle combination of three learning methods offering "just in time" project management knowledge, coupled with hands-on project management experience can be particularly effective in producing project management students with employable skills. Students were required to apply formal project management knowledge to gain…
Purifying Aluminum by Vacuum Distillation
NASA Technical Reports Server (NTRS)
Du Fresne, E. R.
1985-01-01
A proposed method for purifying aluminum employs one-step vacuum distillation. The raw material for the process is impure aluminum produced in the electrolysis of aluminum ore. The impure metal is melted in vacuum. Since aluminum has a much higher vapor pressure than the other constituents, it boils off and condenses on nearby cold surfaces in proportions much greater than those of the other constituents.
Coatings Based on Nanodispersed Oxide Materials Produced by the Method of Pneumatic Spraying
NASA Astrophysics Data System (ADS)
Potekaev, A. I.; Lysak, I. A.; Malinovskaya, T. D.; Lysak, G. V.
2018-03-01
New approaches are proposed by which coatings of nanodispersed oxide materials are formed on polypropylene fibers. It is shown that, in the course of the viscous-fluid to solid-state transition of the polymer, the oxide nanoparticles are stabilized on the surface of the formed fibers.
Fast sweeping methods for hyperbolic systems of conservation laws at steady state II
NASA Astrophysics Data System (ADS)
Engquist, Björn; Froese, Brittany D.; Tsai, Yen-Hsi Richard
2015-04-01
The idea of using fast sweeping methods for solving stationary systems of conservation laws has previously been proposed for efficiently computing solutions with sharp shocks. We further develop these methods to allow for a more challenging class of problems including problems with sonic points, shocks originating in the interior of the domain, rarefaction waves, and two-dimensional systems. We show that fast sweeping methods can produce higher-order accuracy. Computational results validate the claims of accuracy, sharp shock curves, and optimal computational efficiency.
Spurious cross-frequency amplitude-amplitude coupling in nonstationary, nonlinear signals
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Lo, Men-Tzung; Hu, Kun
2016-07-01
Recent studies of brain activities show that cross-frequency coupling (CFC) plays an important role in memory and learning. Many measures have been proposed to investigate the CFC phenomenon, including the correlation between the amplitude envelopes of two brain waves at different frequencies - cross-frequency amplitude-amplitude coupling (AAC). In this short communication, we describe how nonstationary, nonlinear oscillatory signals may produce spurious cross-frequency AAC. Utilizing the empirical mode decomposition, we also propose a new method for assessment of AAC that can potentially reduce the effects of nonlinearity and nonstationarity and, thus, help to avoid the detection of artificial AACs. We compare the performances of this new method and the traditional Fourier-based AAC method. We also discuss the strategies to identify potential spurious AACs.
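A scipy sketch of the traditional Fourier-based AAC pipeline: band-pass two frequency bands, take Hilbert envelopes, and correlate them. Applied to a nonstationary, clipped (nonlinear) test oscillation, the pipeline reports strong coupling even though no genuine cross-frequency interaction exists, which is the artifact the communication analyzes. The test signal and bands are made up.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, fs, lo, hi, order=4):
    """Band-pass filter then amplitude envelope via the Hilbert transform."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

fs = 500.0
t = np.arange(0, 20, 1 / fs)
# A nonstationary, nonlinear oscillation: amplitude-drifting and clipped,
# so harmonics (not genuine coupling) leak across frequency bands.
drift = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
x = np.tanh(2.0 * drift * np.sin(2 * np.pi * 9.0 * t))  # clipping adds harmonics
x += 0.05 * np.random.default_rng(0).standard_normal(t.size)

env_low = band_envelope(x, fs, 7.0, 11.0)     # fundamental band
env_high = band_envelope(x, fs, 25.0, 29.0)   # third-harmonic band
r = np.corrcoef(env_low, env_high)[0, 1]
print("envelope-envelope correlation (spurious AAC): %.2f" % r)
```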
NASA Astrophysics Data System (ADS)
Raccichini, Rinaldo; Varzi, Alberto; Chakravadhanula, Venkata Sai Kiran; Kübel, Christian; Balducci, Andrea; Passerini, Stefano
2015-05-01
The electrochemical properties of graphene depend strongly on its synthesis. Among the different methods proposed so far, liquid phase exfoliation has turned out to be a promising route for the production of graphene. Unfortunately, the low yield of this technique, in terms of the solid material obtained, still limits its use to small-scale applications. In this article we propose a low-cost and environmentally friendly method for producing multilayer crystalline graphene with high yield. This innovative approach, involving an improved ionic liquid-assisted microwave exfoliation of expanded graphite, allows the production of graphene with lithium-ion storage performance superior to that of commercially available graphite, for the first time at low temperatures (<0 °C, down to -30 °C).
Multisource least-squares reverse-time migration with structure-oriented filtering
NASA Astrophysics Data System (ADS)
Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong
2016-09-01
The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve the data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as preconditioner into the multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than those of other commonly used distance-based methods, though not as accurate as maximum likelihood methods applied to good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to a previously published analysis of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
NASA Astrophysics Data System (ADS)
Bellos, Vasilis; Tsakiris, George
2016-09-01
The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model with the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations that lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
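A minimal sketch of the unit-hydrograph step of such a hybrid scheme in Python: effective rainfall, after subtracting a Kostiakov-type infiltration loss, is convolved with the derived unit-hydrograph ordinates. All ordinates, depths, and parameters below are illustrative assumptions, not values from the study.

import numpy as np

uh = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])   # 15-min UH ordinates (m^3/s per mm)
rain = np.array([2.0, 5.0, 8.0, 3.0, 1.0])            # rainfall depth per time step (mm)

# Kostiakov-type decaying infiltration rate; k and a are assumed values
k, a = 1.5, 0.4
t = np.arange(1, rain.size + 1).astype(float)
effective = np.clip(rain - k * t ** (-a), 0.0, None)  # rainfall excess (mm)

runoff = np.convolve(effective, uh)                   # outlet hydrograph (m^3/s)
print(runoff.round(2))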
Antimatter propulsion, status and prospects
NASA Technical Reports Server (NTRS)
Howe, Steven D.; Hynes, Michael V.
1986-01-01
The use of advanced propulsion techniques must be considered if the currently envisioned launch date of the manned Mars mission is delayed until 2020 or later. Within the next thirty years, technological advances may allow such methods as beaming power to the ship, inertial-confinement fusion, or mass-conversion of antiprotons to become feasible. A propulsion system with an Isp of around 5000 s would allow the currently envisioned mission module to fly to Mars in 3 months and would require about one million pounds to be assembled in Earth orbit. Of the possible methods to achieve this, the antiproton mass-conversion reaction offers the highest potential, the greatest problems, and the most fascination. Increasing the production rate of antiprotons is a high-priority task at facilities around the world. The application of antiprotons to propulsion requires the coupling of the energy released in the mass-conversion reaction to thrust-producing mechanisms. Recent proposals entail using the antiprotons to produce inertial confinement fusion or to produce negative muons which can catalyze fusion. By increasing the energy released per antiproton, the effective cost (dollars per joule) can be reduced. These proposals and other areas of research can be investigated now. These short-term results will be important in assessing the long-range feasibility of an antiproton-powered engine.
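As a quick plausibility check on the quoted performance, the Tsiolkovsky rocket equation gives the propellant mass ratio implied by an Isp of 5000 s; the 20 km/s delta-v used here is an assumed round figure for a fast transit, not a number from the paper.

import math

g0 = 9.81        # m/s^2
isp = 5000.0     # s, as quoted for the antiproton-driven concept
dv = 20.0e3      # m/s, assumed fast-transit delta-v

print(f"initial/final mass ratio: {math.exp(dv / (isp * g0)):.2f}")  # about 1.5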
A multi-product green supply chain under government supervision with price and demand uncertainty
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Zamani, Soma
2018-05-01
In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on a green supply chain. The problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels, and it also accounts for uncertainties in market demand and in the sale prices of raw materials and products. The model is further transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions. A genetic algorithm is applied as the solution methodology for the nonlinear programming model. Finally, to investigate the validity of the proposed method, the computational results obtained through the genetic algorithm are compared with the global optimal solution attained by an enumerative method. Analytical results indicate that the proposed GA offers better solutions in large-size problems. We also conclude that financial intervention by the government, consisting of green taxation and subsidization, is an effective method to stabilize green supply chain members' performance.
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in the frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as the Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion, subject to a known class of source image intensity distributions.
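A minimal sketch of a Wiener-like filter of this kind applied row-wise to a sinogram: given a model power spectrum for the ideal sinogram and an approximately flat noise spectrum, each frequency component is attenuated by S/(S + beta*N). The spectra and the regularization parameter beta below are assumptions for illustration, not the paper's derived quantities.

import numpy as np

def wiener_filter_rows(sino, signal_psd, noise_level, beta=1.0):
    # apply H(f) = S(f) / (S(f) + beta * N) to each sinogram row
    F = np.fft.rfft(sino, axis=1)
    H = signal_psd / (signal_psd + beta * noise_level)
    return np.fft.irfft(F * H, n=sino.shape[1], axis=1)

sino = np.random.poisson(100.0, size=(180, 256)).astype(float)
f = np.fft.rfftfreq(256)
signal_psd = 1.0 / (1.0 + (f / 0.05) ** 2)   # assumed smooth object spectrum
print(wiener_filter_rows(sino, signal_psd, noise_level=0.01).shape)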
Wavelet neural networks: a practical guide.
Alexandridis, Antonios K; Zapranis, Achilleas D
2013-06-01
Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework for applying WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods, and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation, and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York, and breast cancer classification. Our results show that the proposed algorithms produce stable and robust results, indicating that the proposed framework can be applied in various applications. Copyright © 2013 Elsevier Ltd. All rights reserved.
Interpretation of fingerprint image quality features extracted by self-organizing maps
NASA Astrophysics Data System (ADS)
Danov, Ivan; Olsen, Martin A.; Busch, Christoph
2014-05-01
Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a presented sample. This paper conducts a comparative analysis on a publicly available dataset for the improvement of the two-tier approach by additionally proposing three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping, and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
Zhang, Y N
2017-01-01
Parkinson's disease (PD) is primarily diagnosed by clinical examinations, such as walking tests, handwriting tests, and MRI diagnostics. In this paper, we propose a machine learning based PD telediagnosis method for smartphones. Classification of PD using speech records is a challenging task because the classification accuracy is still below doctor level. Here we demonstrate automatic classification of PD using time-frequency features, stacked autoencoders (SAE), and a K nearest neighbor (KNN) classifier. The KNN classifier can produce promising classification results from the useful representations learned by the SAE. Empirical results show that the proposed method achieves better performance in all tested cases across classification tasks, demonstrating that machine learning is capable of classifying PD with a level of competence comparable to a doctor. We conclude that a smartphone can therefore potentially provide low-cost PD diagnostic care. This paper also describes an implementation on a browser/server system and reports the running time cost. Both advantages and disadvantages of the proposed telediagnosis system are discussed. PMID:29075547
Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides
Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.
2012-01-01
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636
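The Cerenkov condition beta > 1/n explains why some beta emitters cannot produce CR directly; a short calculation of the standard electron threshold in water:

import math

n = 1.33              # refractive index of water
me_c2 = 0.511         # electron rest energy, MeV
beta_th = 1.0 / n     # minimum speed for Cerenkov emission
gamma_th = 1.0 / math.sqrt(1.0 - beta_th ** 2)
print(f"threshold kinetic energy: {me_c2 * (gamma_th - 1.0):.3f} MeV")  # ~0.264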
Using virtual data for training deep model for hand gesture recognition
NASA Astrophysics Data System (ADS)
Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.
2018-05-01
Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition from hand images. The authors trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from the input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture with split input produces an accuracy rate of 85.2%. The authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method for data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for a copious amount of labelled data during training.
A process for producing lignin and volatile compounds from hydrolysis liquor.
Khazraie, Tooran; Zhang, Yiqian; Tarasov, Dmitry; Gao, Weijue; Price, Jacquelyn; DeMartini, Nikolai; Hupa, Leena; Fatehi, Pedram
2017-01-01
The hot water hydrolysis process is commercially applied for treating wood chips prior to pulping or wood pellet production, and it produces hydrolysis liquor as a by-product. Since the hydrolysis liquor is dilute, the production of value-added materials from it is challenging. In this study, acidification was proposed as a viable method to extract (1) furfural and acetic acid from hot water hydrolysis liquor and (2) lignin compounds from the liquor. The thermal properties of the precipitates made from the acidification of hydrolysis liquor confirmed the volatile characteristics of the precipitates. Membrane dialysis was effective in removing inorganic salts associated with the lignin compounds. The purified lignin compounds had a glass transition temperature (Tg) of 180-190 °C and were thermally stable. The results confirmed that the lignin compounds present in hot water hydrolysis liquor had different characteristics. The acidification of hydrolysis liquor primarily removed the volatile compounds from the liquor. Based on these results, a process for producing purified lignin and precipitates of volatile compounds was proposed.
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.
An Algorithm for Automatically Modifying Train Crew Schedule
NASA Astrophysics Data System (ADS)
Takahashi, Satoru; Kataoka, Kenji; Kojima, Teruhito; Asami, Masayuki
Once a break-down of the train schedule occurs, the crew schedule as well as the train schedule has to be modified as quickly as possible to restore them. In this paper, we propose an algorithm for automatically modifying a crew schedule that takes all constraints into consideration, presenting a model of the combined problem of crews and trains. The proposed algorithm builds an initial solution by relaxing some of the constraint conditions, and then uses a tabu-search method to revise this solution in order to minimize the degree of constraint violation resulting from these relaxed conditions. We show not only that the algorithm can generate a constraint-satisfying solution, but also that the solution will satisfy the experts. That is, by applying the algorithm to actual cases of train-schedule break-down, we show that it is capable of producing a usable solution in a short time, and by comparing both solutions from several points of view, that the solution is at least as good as those produced manually.
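A minimal tabu-search skeleton of the kind described, in Python: start from a relaxed initial solution and repeatedly move to the best non-tabu neighbor to reduce the degree of constraint violation. The neighborhood and violation functions below are toy placeholders, not the paper's crew-scheduling model.

def tabu_search(initial, neighbors, violation, iters=200, tenure=10):
    current = best = initial
    tabu = []
    for _ in range(iters):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=violation)   # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                            # expire the oldest tabu entry
        if violation(current) < violation(best):
            best = current
    return best

# Toy usage: find an integer x minimizing |x - 42| starting from 0.
print(tabu_search(0, lambda x: [x - 3, x - 1, x + 1, x + 3], lambda x: abs(x - 42)))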
Obesity and public policies: the Brazilian government's definitions and strategies.
Dias, Patricia Camacho; Henriques, Patrícia; Anjos, Luiz Antonio Dos; Burlandy, Luciene
2017-07-27
The study analyzes national strategies for dealing with obesity in Brazil in the framework of the Brazilian Unified National Health System (SUS) and the Food and Nutritional Security System (SISAN). Based on the document analysis method, we examined government documents produced in the last 15 years in the following dimensions: definitions of obesity, proposed actions, and strategies for linkage between sectors. In the SUS, obesity is approached as both a risk factor and a disease, with individual and social/environmental approaches aimed at changing eating practices and physical activity. In the SISAN, obesity is also conceived as a social problem involving food insecurity, and new modes of producing, marketing, and consuming foods are proposed to change eating practices in an integrated way. Proposals in the SUS point to an integrated and intra-sector approach to obesity, while those in SISAN emphasize the problem's inter-sector nature from an expanded perspective that challenges the prevailing sector-based institutional structures.
Drawing Road Networks with Mental Maps.
Lin, Shih-Syun; Lin, Chao-Hung; Hu, Yan-Jhang; Lee, Tong-Yee
2014-09-01
Tourist and destination maps are thematic maps designed to represent specific themes. The road network topologies in these maps are generally more important than the geometric accuracy of roads. A road network warping method is proposed to facilitate map generation and improve theme representation in maps. The basic idea is to deform a road network to meet a user-specified mental map while an optimization process propagates the distortions originating from the road network warping. To generate a map, the proposed method includes algorithms for estimating road significance and for deforming a road network according to various geometric and aesthetic constraints. The proposed method can produce an iconic mark of a theme from a road network and meet a user-specified mental map. Therefore, the resulting map can serve as a tourist or destination map that not only provides visual aids for route planning and navigation tasks, but also visually emphasizes the presentation of a theme for the purpose of advertising. In the experiments, map-generation demonstrations show that our method can generate deformed tourist and destination maps efficiently.
Feature-based Approach in Product Design with Energy Efficiency Consideration
NASA Astrophysics Data System (ADS)
Li, D. D.; Zhang, Y. J.
2017-10-01
In this paper, a method to measure the energy efficiency and ecological footprint metrics of features is proposed for product design. First, the energy consumption models of various manufacturing features, such as cutting features and welding features, are studied. Then, the total energy consumption of a product is modeled and estimated according to its features. Next, feature chains, composed of several sequential features ordered by the production operations, are defined and analyzed to calculate a globally optimal solution; a corresponding assessment model is also proposed to estimate their energy efficiency and ecological footprint. Finally, an example is given to validate the proposed approach in the improvement of sustainability.
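A minimal sketch of such feature-based energy accounting in Python: each manufacturing feature carries its own consumption model, and the product total is summed over the feature chain. Both the per-feature models and all numbers are illustrative assumptions.

def cutting_energy(volume_cm3, spec_energy_j_per_cm3=35.0):
    return volume_cm3 * spec_energy_j_per_cm3          # material-removal model

def welding_energy(seam_length_mm, power_w=1500.0, speed_mm_per_s=5.0):
    return power_w * seam_length_mm / speed_mm_per_s   # power x dwell time

feature_chain = [
    ("cut pocket", cutting_energy(120.0)),
    ("weld seam", welding_energy(300.0)),
]
total_j = sum(e for _, e in feature_chain)
print(f"estimated product energy: {total_j / 1000:.1f} kJ")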
Development of a Flexible Non-Metal Electrode for Cell Stimulation and Recording
Gong, Cihun-Siyong Alex; Syu, Wun-Jia; Lei, Kin Fong; Hwang, Yih-Shiou
2016-01-01
This study presents a method of producing flexible electrodes for potentially simultaneously stimulating and measuring cellular signals in retinal cells. Currently, most multi-electrode applications rely primarily on etching, but the metals involved have a certain degree of brittleness, leaving them prone to cracking under prolonged pressure. This study proposes using silver chloride ink as the conductive metal and polydimethylsiloxane (PDMS) as the substrate to provide electrodes with an increased degree of flexibility that allows them to bend. The structure is divided into an electrode layer made of PDMS and silver chloride ink, and a PDMS film coating layer. PDMS can be mixed in different proportions to modify the degree of rigidity. The proposed method involves three steps. The first step entails the manufacturing of the electrode, using silver chloride ink as the conductive material, and using computer software to define the electrode size and micro-engraving mechanisms to produce the electrode pattern. The resulting uniform PDMS pattern was then baked onto the model, and the flow channel was filled with the conductive material before air drying to produce the required electrode. In the second step, we tested the electrode, using an impedance analyzer to measure electrode cyclic voltammetry and impedance. In the third step, mechanical and biocompatibility tests were conducted to determine the electrode properties. This study aims to produce a flexible, non-metallic sensing electrode which fits snugly for use in a range of measurement applications. PMID:27690049
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Embedded Palmprint Recognition System Using OMAP 3530
Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen
2012-01-01
We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation between the magnitude of the wavelet response at the central pixel and that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of the OMAP 3530 to work together with the ARM processor for feature extraction. While complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, user interface, and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real-time performance. PMID:22438721
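A minimal numpy sketch of the G-LBP idea: convolve with a Gabor kernel, then binary-code each pixel by comparing the response magnitude of its 8 neighbors against the center. The kernel parameters and test image are illustrative; the deployed system additionally matches the codes against a gallery.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # carrier along theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * xr / lam)

def glbp_codes(img):
    mag = np.abs(fftconvolve(img, gabor_kernel(), mode="same"))
    codes = np.zeros(mag.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = np.roll(np.roll(mag, dy, axis=0), dx, axis=1)
        codes |= (neighbor >= mag).astype(np.uint8) << bit
    return codes

img = np.random.rand(128, 128)
print(np.unique(glbp_codes(img)).size)   # number of distinct 8-bit codes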
Kumar, Shiu; Mamun, Kabir; Sharma, Alok
2017-12-01
Classification of electroencephalography (EEG) signals for motor imagery based brain computer interfaces (MI-BCI) is an exigent task, and the common spatial pattern (CSP) has been extensively explored for this purpose. In this work, we focused on developing a new framework for classification of EEG signals for MI-BCI. We propose a single-band CSP framework for MI-BCI that utilizes the concept of tangent space mapping (TSM) in the manifold of covariance matrices. The proposed method is named CSP-TSM. Spatial filtering is performed on the bandpass-filtered MI EEG signal. The Riemannian tangent space is utilized for extracting features from the spatially filtered signal. The TSM features are then fused with the CSP variance-based features, and feature selection is performed using Lasso. Linear discriminant analysis (LDA) is then applied to the selected features, and finally classification is done using a support vector machine (SVM) classifier. The proposed framework gives improved performance for MI EEG signal classification in comparison with several competing methods. Experiments conducted show that the proposed framework reduces the overall classification error rate for MI-BCI by 3.16%, 5.10% and 1.70% (for BCI Competition III dataset IVa, BCI Competition IV Dataset I and BCI Competition IV Dataset IIb, respectively) compared to the conventional CSP method under the same experimental settings. The proposed CSP-TSM method produces promising results when compared with several competing methods in this paper. In addition, the computational complexity is lower than that of the TSM method. Our proposed CSP-TSM framework can potentially be used for developing improved MI-BCI systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
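A minimal numpy sketch of the tangent space mapping step: whiten each trial covariance by a reference mean and take the matrix logarithm, producing Euclidean feature vectors. This toy version uses the arithmetic mean as the reference and omits the usual sqrt(2) weighting of off-diagonal terms; the paper additionally fuses these features with CSP variances.

import numpy as np
from scipy.linalg import inv, logm, sqrtm

def tangent_space(covs, C_ref):
    W = np.real(inv(sqrtm(C_ref)))       # whitening by the reference covariance
    feats = []
    for C in covs:
        S = logm(W @ C @ W)              # log-map to the tangent space at C_ref
        feats.append(np.real(S[np.triu_indices_from(S)]))
    return np.array(feats)

rng = np.random.default_rng(0)
trials = [np.cov(rng.standard_normal((8, 200))) for _ in range(10)]
feats = tangent_space(trials, np.mean(trials, axis=0))
print(feats.shape)                       # (10, 36) feature vectors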
NASA Astrophysics Data System (ADS)
Yang, Fei; Raynova, Stella; Singh, Ajit; Zhao, Qinyang; Romero, Carlos; Bolzoni, Leandro
2018-02-01
Powder metallurgy is a very attractive method for producing titanium alloys, which can be near-net-shape formed and allow freedom in composition selection. However, applications are still limited by product affordability. In this paper, we discuss a possible cost-effective route, combining fast heating and hot processing, to produce titanium alloys with mechanical properties similar to or even better than those of ingot metallurgy titanium alloys. Two titanium alloys, Ti-5Al-5V-5Mo-3Cr (Ti-5553) and Ti-5Fe, were successfully produced from HDH titanium powder and other master alloy powders using the proposed processing route. The effect of the processing route on microstructural variation and mechanical properties is discussed.
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel image is extracted from the color retinal image and used to produce a Gabor feature image using the GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement of blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated using the publicly available DRIVE database.
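A minimal sketch of the pipeline with scikit-image: take the green channel, compute a Gabor response, Otsu-threshold both images, and combine the binary maps. The enhancement steps are simplified and the Gabor frequency is an assumed value, not the paper's setting.

import numpy as np
from skimage import data
from skimage.filters import gabor, threshold_otsu

rgb = data.retina() if hasattr(data, "retina") else data.astronaut()  # sample RGB image
green = rgb[..., 1].astype(float)
gabor_mag = np.abs(gabor(green, frequency=0.15)[0])   # real part of the Gabor response

vessels_a = green < threshold_otsu(green)             # vessels appear dark in green
vessels_b = gabor_mag > threshold_otsu(gabor_mag)     # strong oriented response
print((vessels_a | vessels_b).mean())                 # fraction flagged as vessel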
Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu
2017-01-01
It is difficult for learning models to achieve high classification performance with imbalanced data sets, because when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distribution shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of one-sided selection (OSS), the well-known SMOTEBoost (SB) method, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
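A minimal numpy sketch of the two balancing steps: drop majority rows outside the box-and-whisker (IQR) fences, then synthesize minority samples from a fitted distribution. A per-feature normal fit stands in here for the paper's Mega-Trend-Diffusion estimate and its proposed minority distribution.

import numpy as np

rng = np.random.default_rng(1)
majority = rng.normal(0.0, 1.0, size=(500, 3))
minority = rng.normal(2.0, 0.5, size=(30, 3))

# 1) box-and-whisker fences per feature; keep rows inside all fences
q1, q3 = np.percentile(majority, [25, 75], axis=0)
iqr = q3 - q1
inside = np.all((majority >= q1 - 1.5 * iqr) & (majority <= q3 + 1.5 * iqr), axis=1)
majority_reduced = majority[inside]

# 2) synthesize minority samples from the fitted distribution (assumed normal here)
mu, sigma = minority.mean(axis=0), minority.std(axis=0)
n_new = majority_reduced.shape[0] - minority.shape[0]
balanced_minority = np.vstack([minority, rng.normal(mu, sigma, size=(n_new, 3))])
print(majority_reduced.shape, balanced_minority.shape)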
NASA Astrophysics Data System (ADS)
Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo
2018-05-01
An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor, generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances, and is mainly caused by the dispersion phenomenon among other reasons, therefore using conventional localization/multilateration methods will produce localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method is different from the other methods, and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data will be used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium. Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
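A minimal sketch of the underlying estimation problem: if windowed energy decays as log E_i = log E0 - alpha * d_i, each candidate source position can be scored by how well a straight line fits log-energy against sensor distance, and the best-fitting position is the estimate. The geometry, decay rate, and noise level below are illustrative assumptions, not the paper's floor model.

import numpy as np

rng = np.random.default_rng(2)
sensors = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
true_src, alpha, logE0 = np.array([1.2, 2.7]), 0.8, np.log(10.0)

d_true = np.linalg.norm(sensors - true_src, axis=1)
logE = logE0 - alpha * d_true + 0.05 * rng.standard_normal(4)  # noisy log-energies

best, best_res = None, np.inf
for x in np.linspace(0, 4, 81):
    for y in np.linspace(0, 4, 81):
        d = np.linalg.norm(sensors - [x, y], axis=1)
        A = np.column_stack([np.ones(4), -d])        # model: logE = logE0 - alpha*d
        coef = np.linalg.lstsq(A, logE, rcond=None)[0]
        r = np.sum((A @ coef - logE) ** 2)
        if r < best_res:
            best, best_res = (x, y), r
print(best)  # close to (1.2, 2.7)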
Non-negative matrix factorization in texture feature for classification of dementia with MRI data
NASA Astrophysics Data System (ADS)
Sarwinda, D.; Bustamam, A.; Ardaneswari, G.
2017-07-01
This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process of the MRI data, we obtained seven features from the gray level co-occurrence matrix. Non-negative matrix factorization selected the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adopted to classify dementia, i.e., Alzheimer's disease, Mild Cognitive Impairment (MCI), and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's disease versus normal control. The proposed method is also compared with another feature selection method, i.e., Principal Component Analysis (PCA).
Optical measurement of high-temperature melt flow rate.
Bizjan, Benjamin; Širok, Brane; Chen, Jinpeng
2018-05-20
This paper presents an optical method and system for contactless measurement of the mass flow rate of melts by digital cameras. The proposed method is based on reconstruction of the melt stream geometry and flow velocity calculation by cross correlation, and is very cost-effective due to its modest hardware requirements. Using a laboratory test rig with a small inductive melting pot and reference mass flow rate measurement by weighing, the proposed method was demonstrated to have an excellent dynamic response (on the order of 0.1 s) while producing deviations from the reference of about 5% in the steady-state flow regime. Similar results were obtained in an industrial stone wool production line for two repeated measurements. Our method was tested over a wide range of melt flow rates (0.05-1.2 kg/s) and did not require very fast cameras (120 frames per second would be sufficient for most industrial applications).
Laser-plasma interactions with a Fourier-Bessel particle-in-cell method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andriyash, Igor A., E-mail: igor.andriyash@gmail.com; LOA, ENSTA ParisTech, CNRS, Ecole polytechnique, Université Paris-Saclay, 828 bd des Maréchaux, 91762 Palaiseau cedex; Lehe, Remi
A new spectral particle-in-cell (PIC) method for plasma modeling is presented and discussed. In the proposed scheme, the Fourier-Bessel transform is used to translate the Maxwell equations to the quasi-cylindrical spectral domain. In this domain, the equations are solved analytically in time, and the spatial derivatives are approximated with high accuracy. In contrast to the finite-difference time domain (FDTD) methods that are used commonly in PIC, the developed method does not produce numerical dispersion and does not involve grid staggering for the electric and magnetic fields. These features are especially valuable in modeling the wakefield acceleration of particles in plasmas. The proposed algorithm is implemented in the code PLARES-PIC, and the test simulations of laser plasma interactions are compared to the ones done with the quasi-cylindrical FDTD PIC code CALDER-CIRC.
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. Evaluation of the two unknown phase shifts is performed using the interframe correlation between interferograms. The method consists of two stages. The first stage provides the recording of three interferograms of a test object and their processing, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed using a linear low-pass filter. Computer simulation and experiments performed to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
Real-time traffic sign recognition based on a general purpose GPU and deep-learning
Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea). PMID:28264011
Shear Wave Wavefront Mapping Using Ultrasound Color Flow Imaging.
Yamakoshi, Yoshiki; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi
2015-10-01
A wavefront reconstruction method for a continuous shear wave is proposed. The method uses ultrasound color flow imaging (CFI) to detect the shear wave's wavefront. When the shear wave vibration frequency satisfies the required frequency condition and the displacement amplitude satisfies the displacement amplitude condition, zero and maximum flow velocities appear at the shear wave vibration phases of zero and π rad, respectively. These specific flow velocities produce the shear wave's wavefront map in CFI. An important feature of this method is that the shear wave propagation is observed in real time without addition of extra functions to the ultrasound imaging system. The experiments are performed using a 6.5 MHz CFI system. The shear wave is excited by a multilayer piezoelectric actuator. In a phantom experiment, the shear wave velocities estimated using the proposed method and those estimated using a system based on displacement measurement show good agreement. © The Author(s) 2015.
A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng
2017-12-01
A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi Diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees, and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and is better than the results of state-of-the-art methods on these datasets.
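A minimal weighted A* sketch in Python over a cost grid of the kind used for seam-line refinement: the heuristic is inflated by a weight w >= 1, trading strict optimality for speed. The random cost grid stands in for an edge-strength map derived from the DSM.

import heapq
import itertools
import numpy as np

def weighted_astar(cost, start, goal, w=1.5):
    h = lambda p: w * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))  # inflated heuristic
    tie = itertools.count()
    openq = [(h(start), next(tie), 0.0, start, None)]
    came, gbest = {}, {start: 0.0}
    while openq:
        _, _, g, node, parent = heapq.heappop(openq)
        if node in came:
            continue
        came[node] = parent
        if node == goal:                       # walk back to reconstruct the path
            path = [node]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = node[0] + dy, node[1] + dx
            if 0 <= ny < cost.shape[0] and 0 <= nx < cost.shape[1]:
                ng = g + cost[ny, nx]
                if ng < gbest.get((ny, nx), np.inf):
                    gbest[(ny, nx)] = ng
                    heapq.heappush(openq, (ng + h((ny, nx)), next(tie), ng, (ny, nx), node))
    return None

grid = np.random.rand(50, 50) + 0.1            # stand-in for an edge-based cost map
print(len(weighted_astar(grid, (0, 0), (49, 49))))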
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method based on these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
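A minimal 1-D sketch of the core idea: model wavelet coefficients as a mixture of Gaussian noise and Laplacian detail, fit the parameters by EM, and shrink each coefficient by its posterior probability of being detail. The paper applies this to 2D and 3D MRI wavelet subbands; the data here are synthetic.

import numpy as np

rng = np.random.default_rng(3)
d = np.concatenate([rng.normal(0, 0.5, 4000),     # noise-like coefficients
                    rng.laplace(0, 2.0, 1000)])   # detail-like coefficients

pi, sigma, b = 0.5, 1.0, 1.0                      # initial parameter guesses
for _ in range(50):                               # EM iterations
    gauss = np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    lap = np.exp(-np.abs(d) / b) / (2 * b)
    resp = pi * lap / (pi * lap + (1 - pi) * gauss)   # posterior P(detail | d)
    pi = resp.mean()
    b = np.sum(resp * np.abs(d)) / np.sum(resp)
    sigma = np.sqrt(np.sum((1 - resp) * d**2) / np.sum(1 - resp))

d_shrunk = resp * d                               # posterior-weighted shrinkage
print(f"pi={pi:.2f} sigma={sigma:.2f} b={b:.2f}")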
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study will provide new insights into developing more accurate and reliable biological models based on limited and low-quality experimental data.
NASA Astrophysics Data System (ADS)
Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng
2017-10-01
The triangle method based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI) has been widely used for estimates of the evaporative fraction (EF). In the present study, a universal triangle method was proposed by transforming the Ts-VI feature space from the regional scale to the pixel scale. The retrieval of EF is then related only to the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge, determined by the surface energy balance principle, and the wet edge, determined by the average air temperature of open water. The universal triangle method was validated using EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole of 2004. The results of this study show that the accuracy produced by both parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the regional Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, a capability the traditional triangle method does not possess.
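A minimal sketch of the per-pixel evaluation once the boundary conditions are fixed: with a dry-edge temperature T_dry and a wet-edge temperature T_wet for the pixel, EF follows from where the observed Ts falls between them. The linear interpolation below is one simple parameterization, not necessarily the paper's scheme; the temperatures are illustrative.

import numpy as np

def evaporative_fraction(ts, t_dry, t_wet):
    # EF in [0, 1]: 1 at the wet edge, 0 at the dry edge
    return np.clip((t_dry - ts) / (t_dry - t_wet), 0.0, 1.0)

print(evaporative_fraction(np.array([300.0, 310.0, 318.0]), t_dry=320.0, t_wet=295.0))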
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes, and in the second step, a data cleaning method is used to remove the overlapping samples produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performs well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
Second Harmonic Generation of Unpolarized Light
NASA Astrophysics Data System (ADS)
Ding, Changqin; Ulcickas, James R. W.; Deng, Fengyuan; Simpson, Garth J.
2017-11-01
A Mueller tensor mathematical framework was applied for predicting and interpreting the second harmonic generation (SHG) produced with an unpolarized fundamental beam. In deep tissue imaging through SHG and multiphoton fluorescence, partial or complete depolarization of the incident light complicates polarization analysis. The proposed framework has the distinct advantage of seamlessly merging the purely polarized theory based on the Jones or Cartesian susceptibility tensors with a more general Mueller tensor framework capable of handling a partially depolarized fundamental and/or SHG. The predictions of the model are in excellent agreement with experimental measurements of z-cut quartz and mouse tail tendon obtained with polarized and depolarized incident light. The polarization-dependent SHG produced with an unpolarized fundamental allowed determination of collagen fiber orientation in agreement with orthogonal methods based on image analysis. This method has the distinct advantage of being immune to birefringence or depolarization of the fundamental beam for structural analysis of tissues.
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging, as one application of computational ghost imaging, possesses spatial and spectral resolving abilities and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow-band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses a single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue channels, respectively) and random patterns. The results of simulation and experiment verify that our method is effective in recovering the colored object. Multispectral images are produced simultaneously by a single-pixel detector, which significantly reduces the amount of data acquisition.
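A minimal numpy sketch of correlation-based recovery with color-multiplexed illumination: each measurement uses three random binary patterns (one per color channel), a single bucket value records the total transmitted light, and each channel is recovered by correlating the bucket sequence with that channel's patterns. Sizes and pattern statistics are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
H, W, M = 16, 16, 4000
obj = rng.random((3, H, W))                                 # unknown RGB object (toy)
patterns = (rng.random((M, 3, H, W)) < 0.5).astype(float)   # binary patterns per channel

bucket = np.einsum('mchw,chw->m', patterns, obj)            # single-detector totals
recon = np.einsum('m,mchw->chw', bucket - bucket.mean(), patterns) / M
for c in range(3):                                          # each channel tracks the object
    print(np.corrcoef(recon[c].ravel(), obj[c].ravel())[0, 1])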
Production of 92Y via the 92Zr(n,p) reaction using the C(d,n) accelerator neutron source
NASA Astrophysics Data System (ADS)
Kin, Tadahiro; Sanzen, Yukimasa; Kamida, Masaki; Watanabe, Yukinobu; Itoh, Masatoshi
2017-09-01
We have proposed a new method of producing the medical radioisotope 92Y as a candidate alternative to the 111In bioscan prior to 90Y ibritumomab tiuxetan treatment. The 92Y isotope is produced via the 92Zr(n,p) reaction using accelerator neutrons generated by the interaction of deuteron beams with carbon. A feasibility experiment was performed at the Cyclotron and Radioisotope Center, Tohoku University. A thick carbon target was irradiated by 20-MeV deuterons to produce accelerator neutrons. The thick target neutron yield (TTNY) was measured using the multiple-foil activation method. The foils were made of Al, Fe, Co, Ni, Zn, Zr, Nb, and Au. The production amount of 92Y and the induced impurities were estimated by simulation with the measured TTNY and the JENDL-4.0 nuclear data.
An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor
Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui
2017-01-01
In this work, an elimination method for the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using a cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of curve spirals; the optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB, which should be greater than 7.42 to meet the output error requirement of no more than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be kept below 0.43% with this elimination method within the range from −20 °C to 40 °C. PMID:28282953
Convolutional neural network features based change detection in satellite images
NASA Astrophysics Data System (ADS)
Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong
2016-07-01
With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (Deep Learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Networks (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN, which avoids the limited performance of hand-crafted features. First, CNN features are extracted through different convolutional layers. Then, a concatenation step is applied after a normalization step, resulting in a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images with qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
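A minimal PyTorch sketch of the pipeline, with torchvision's pretrained VGG16 standing in for the paper's network: extract convolutional feature maps for both dates, upsample them to image size, and take the pixel-wise Euclidean distance as the change map. The layer cut and test images are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def feature_map(img):                        # img: (1, 3, H, W), normalized
    with torch.no_grad():
        f = vgg(img)                         # (1, 256, H/4, W/4) at this cut
    return F.interpolate(f, size=img.shape[2:], mode="bilinear", align_corners=False)

img_t1 = torch.rand(1, 3, 256, 256)          # stand-ins for the bitemporal images
img_t2 = torch.rand(1, 3, 256, 256)
f1 = F.normalize(feature_map(img_t1), dim=1)
f2 = F.normalize(feature_map(img_t2), dim=1)
change_map = torch.linalg.norm(f1 - f2, dim=1).squeeze(0)   # (256, 256)
print(change_map.shape)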
A novel pre-processing technique for improving image quality in digital breast tomosynthesis.
Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong
2017-02-01
Nonlinear pre-reconstruction processing of the projection data in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, is usually discouraged, because such processing would violate the physics of image formation in CT. However, one can devise a pre-processing step to enhance the detectability of lesions in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since the detection of lesions such as micro-calcifications and masses in breasts is the purpose of using DBT, a technique producing higher lesion detectability is justified as a virtue. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value that represents the boundary between breast and background. After that, both histogram parts were shifted by an appropriate offset and the histogram-modified projection data were log-transformed. A filtered-backprojection (FBP) algorithm was used for image reconstruction of DBT. To evaluate the performance of the proposed method, we computed the detectability index for images reconstructed from clinically acquired data. Typical breast border enhancement artifacts were greatly suppressed, and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without introducing additional image artifacts. In this work, we report a novel pre-processing technique that improves the detectability of lesions in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique. The proposed method not only increased lesion detectability but also reduced typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
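A minimal sketch of the described histogram modification, assuming raw counts where background (air) pixels sit at or above a breast/background boundary value; the boundary and offset are placeholders for the values chosen in the paper:

```python
import numpy as np

def preprocess_projection(proj, boundary, offset):
    """proj: raw projection counts; boundary separates breast from background."""
    p = proj.astype(np.float64).copy()
    p[p >= boundary] = boundary        # set background to the boundary value
    p += offset                        # shift both histogram parts
    return -np.log(p / p.max())        # log-transform before FBP
```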
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m³. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
On the Possibility of Acceleration of Polarized Protons in the Synchrotron Nuclotron
NASA Astrophysics Data System (ADS)
Shatunov, Yu. M.; Koop, I. A.; Otboev, A. V.; Mane, S. P.; Shatunov, P. Yu.
2018-05-01
One of the main tasks of the NICA project is to produce colliding beams of polarized protons. It is planned to accelerate polarized protons from the source to the maximum energy in the existing proton synchrotron. We consider all depolarizing spin resonances in the Nuclotron and propose methods to overcome them.
The Mini-Patt Approach for Individualizing Instruction.
ERIC Educational Resources Information Center
Jenkins, Jimmy R.; Krockover, Gerald H.
A method is proposed which is said to allow elementary and secondary teachers to prepare 30-minute audio-tutorial tapes in one to three hours. A list of materials needed is provided, and the six-step procedure is outlined. More than 300 Mini-Patt tapes are said to have been produced for use from elementary…
A General Multilevel SEM Framework for Assessing Multilevel Mediation
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zyphur, Michael J.; Zhang, Zhen
2010-01-01
Several methods for testing mediation hypotheses with 2-level nested data have been proposed by researchers using a multilevel modeling (MLM) paradigm. However, these MLM approaches do not accommodate mediation pathways with Level-2 outcomes and may produce conflated estimates of between- and within-level components of indirect effects. Moreover,…
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay datasets. The quality of results produced by our tool on the GPU is the same as that on a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
NASA Astrophysics Data System (ADS)
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we address the issue of over-segmented regions produced in watershed by merging the regions using global features. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Furthermore, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
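The following sketch outlines the watershed-then-merge idea with scikit-image (graph-merging API as in scikit-image ≥ 0.20); KMeans stands in for FCM, the SA optimization is omitted, and a fixed mean-color threshold replaces the paper's optimized similarity criterion:

```python
import numpy as np
from skimage import filters, segmentation, graph
from sklearn.cluster import KMeans

def watershed_then_merge(image, n_clusters=3, thresh=0.08):
    """image: 2-D grayscale array in [0, 1]."""
    grad = filters.sobel(image)                          # gradient image
    labels = segmentation.watershed(grad, markers=250,
                                    compactness=0.001)   # over-segmentation
    # global feature information from clustering in the (intensity) feature space
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(image.reshape(-1, 1))
    # merge over-segmented regions through a region adjacency graph
    rgb = np.stack([image] * 3, axis=-1)
    rag = graph.rag_mean_color(rgb, labels)
    return graph.cut_threshold(labels, rag, thresh), km.cluster_centers_
```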
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniform illumination images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies, and is thus suitable for handling complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus avoiding exaggerated colors in dark areas and depressed colors in very bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
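A minimal sketch of a locally adaptive demarcation, assuming the luminance channel is in [0, 1]; the Gaussian local mean and the two fixed gamma values are illustrative stand-ins for the paper's nonlinear mapping:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_luminance(lum, sigma=25.0):
    """lum: luminance channel in [0, 1]."""
    local_mean = gaussian_filter(lum, sigma)        # locally adaptive demarcation
    gamma = np.where(lum < local_mean, 0.7, 1.3)    # lighten underexposure, dim overexposure
    return np.power(lum, gamma)
```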
NASA Astrophysics Data System (ADS)
El Harti, Abderrazak; Lhissou, Rachid; Chokmani, Karem; Ouzemou, Jamal-eddine; Hassouna, Mohamed; Bachaoui, El Mostafa; El Ghmari, Abderrahmene
2016-08-01
Soil salinization is a major environmental issue in irrigated agricultural production. Conventional methods for salinization monitoring are time-consuming and costly, and are limited by the high spatiotemporal variability of this phenomenon. This work aims to propose a spatiotemporal monitoring method for soil salinization in the Tadla plain in central Morocco using spectral indices derived from Thematic Mapper (TM) and Operational Land Imager (OLI) data. Six Landsat TM/OLI satellite images acquired over a 13-year period (2000-2013), coupled with in-situ electrical conductivity (EC) measurements, were used to develop the proposed method. After radiometric and atmospheric correction of the TM/OLI images, a new soil salinity index (OLI-SI) is proposed for soil EC estimation. Validation shows that this index allows a satisfactory EC estimation in the Tadla irrigated perimeter, with a coefficient of determination R² varying from 0.55 to 0.77 and a Root Mean Square Error (RMSE) ranging between 1.02 dS/m and 2.35 dS/m. The time series of salinity maps produced over the Tadla plain using the proposed method shows that salinity is decreasing in intensity and progressively increasing in spatial extent over the 2000-2013 period. This trend has resulted in a decrease in agricultural activities in the southwestern part of the perimeter, located at the hydraulic downstream end.
An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei
2013-06-01
Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and surgical guidance. Because of its low signal-to-noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for the registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which helps handle the speckle noise and preserve the geometric continuity of US images. In the experiments, a series of US images and several similarity metrics are used to evaluate the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, automatically, and with robustness to noise.
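For reference, one classical demons force step derived from the optical flow equation looks as follows (a sketch; the paper's additional "inertia force" term from the Moore neighborhood is omitted):

```python
import numpy as np

def demons_step(fixed, moving):
    """Compute one classical demons displacement update for 2-D images."""
    gy, gx = np.gradient(fixed)                  # spatial gradients of the fixed image
    diff = moving - fixed                        # intensity mismatch driving the force
    denom = gx**2 + gy**2 + diff**2
    denom = np.where(denom == 0, np.finfo(float).eps, denom)
    return diff * gx / denom, diff * gy / denom  # displacement field (ux, uy)
```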
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
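As an illustration of the transform stage, a 2-D Hadamard transform of a residual block can be written as below (the 8x8 block size and scaling convention are assumptions, not the paper's exact settings):

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                       # 8x8 +/-1 Hadamard matrix

def hadamard_2d(block):
    """Forward 2-D Hadamard transform of an 8x8 prediction residual."""
    return H @ block @ H.T            # inverse: H @ X @ H.T / 64
```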
Inhomogeneity compensation for MR brain image segmentation using a multi-stage FCM-based approach.
Szilágyi, László; Szilágyi, Sándor M; Dávid, László; Benyó, Zoltán
2008-01-01
Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for MR image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms. This paper proposes a multiple-stage fuzzy c-means (FCM) based algorithm for the estimation and compensation of slowly varying additive or multiplicative noise, supported by a pre-filtering technique for Gaussian and impulse noise elimination. The slowly varying behavior of the bias or gain field is assured by a smoothing filter that performs context-dependent averaging based on a morphological criterion. Experiments using 2-D synthetic phantoms and real MR images show that the proposed method provides accurate segmentation. The produced segmentation and fuzzy membership values can serve as excellent support for 3-D registration and segmentation techniques.
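A plain fuzzy c-means iteration, the building block of the multi-stage algorithm, can be sketched as follows (the INU bias-field estimation and pre-filtering stages are omitted):

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))             # fuzzy memberships (n, c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                  # renormalize memberships
    return U, centers
```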
Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure.
Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou
2017-11-17
In this study, TiO2 nanocolloids were successfully fabricated in deionized water without using suspending agents through the electric spark discharge method at room temperature and under normal atmospheric pressure. This method was exceptional because it did not create nanoparticle dispersion and the produced colloids contained no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity, 99.99%). The EDM pulse on time (Ton) and pulse off time (Toff) were respectively set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in Ton increased the suspension stability, but there were no significant correlations between Ton and particle size. Colloids produced from the four production configurations showed a minimum particle size between 29.39 and 52.85 nm and a zeta-potential between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at a low cost has led many industrial nations to consider tolerances as a key factor in controlling cost and remaining competitive. In practice, tolerance allocation is still applied mostly to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is commonly used, but it is computationally expensive. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on a neural network with Monte Carlo results as basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, and the total cost of the system is minimized by an optimization method. The approach has been applied to a small-signal amplifier circuit as an example. This method can be easily extended to a complex system of n components.
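A minimal sketch of the basic idea, training a neural network surrogate on Monte Carlo basis data with scikit-learn; the component values, tolerances, and gain formula are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Monte Carlo basis data: sample hypothetical component values within tolerances
nominal = np.array([1e3, 4.7e3, 100e-9])                  # R1, R2, C (placeholders)
tol = np.array([0.05, 0.05, 0.10])
X = nominal * (1 + tol * rng.uniform(-1, 1, size=(5000, 3)))
y = X[:, 1] / X[:, 0]                                     # hypothetical amplifier gain
# error back-propagation training of a fast surrogate of the Monte Carlo model
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
```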
Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data.
Luo, Ting; Shi, Changrong; Zhao, Xing; Zhao, Yunsong; Xu, Jinqiu
2016-01-01
In this paper, we propose an automatic method of synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data for directly observing the whole dentition without the superimposition of other structures. This method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of 3D CBCT data. Then, based on this curve, the long axial curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing this 3D surface. Both open-bite shaped and closed-bite shaped dental CBCT datasets were applied in this study, and the resulting images were analyzed to evaluate the effectiveness of this method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without the blur and superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show more features, such as the mandibular nerve canal. One feature of the proposed method is that it is automatically performed without human intervention. Another feature of the proposed method is that it requires thinner panoramic radiographs to show the whole dentition than those produced by other existing methods, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, this method can rapidly process common dental CBCT data. The speed and image quality of this method make it an attractive option for observing the whole dentition in a clinical setting.
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, namely the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
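The LBP step admits a compact sketch: with a Jacobian J relating conductivity changes to power-density changes, the reconstruction is a normalized back-projection rather than a full inverse (illustrative; the paper derives its Jacobian analytically):

```python
import numpy as np

def lbp_reconstruct(J, delta_p):
    """J: Jacobian mapping conductivity change to power-density change;
    delta_p: measured power-density change."""
    back = J.T @ delta_p
    norm = J.T @ np.ones_like(delta_p)          # sensitivity normalization
    return back / np.maximum(norm, np.finfo(float).eps)
```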
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Path Planning Algorithms for Autonomous Border Patrol Vehicles
NASA Astrophysics Data System (ADS)
Lau, George Tin Lam
This thesis presents an online path planning algorithm developed for unmanned vehicles in charge of autonomous border patrol. In this Pursuit-Evasion game, the unmanned vehicle is required to capture multiple trespassers on its own before any of them reach a target safe house where they are safe from capture. The problem formulation is based on Isaacs' Target Guarding problem, but extended to the case of multiple evaders. The proposed path planning method is based on Rapidly-exploring random trees (RRT) and is capable of producing trajectories within several seconds to capture 2 or 3 evaders. Simulations are carried out to demonstrate that the resulting trajectories approach the optimal solution produced by a nonlinear programming-based numerical optimal control solver. Experiments are also conducted on unmanned ground vehicles to show the feasibility of implementing the proposed online path planning algorithm on physical applications.
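A bare-bones RRT loop conveying the idea is sketched below; the workspace bounds, step size, and is_free collision check are placeholders, and the thesis's pursuit-evasion constraints are not modeled:

```python
import numpy as np

def rrt(start, goal, is_free, bounds=10.0, step=0.5, iters=5000, tol=0.5, seed=0):
    """Bare-bones RRT in a square 2-D workspace."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes, parent = [np.asarray(start, float)], {0: None}
    for _ in range(iters):
        sample = rng.uniform(0.0, bounds, size=2)
        near = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        d = sample - nodes[near]
        new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-12)
        if not is_free(new):
            continue                           # skip states inside obstacles
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - goal) < tol:   # goal region reached: backtrack
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

# e.g. rrt((0, 0), (9, 9), lambda p: True) in an obstacle-free workspace
```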
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of parametric methods are that they necessarily produce smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real datasets and assessing each model using goodness-of-fit techniques.
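The parametric construction admits a compact sketch: with the mean and SD modeled by regressions on age (and skewness handled by a prior transformation of the data), the centiles follow from an explicit formula; the polynomial models below are an illustrative assumption:

```python
import numpy as np
from scipy import stats

def reference_interval(age, beta_mean, beta_sd, level=0.95):
    """Age-specific interval from polynomial regressions of mean and SD on age."""
    z = stats.norm.ppf(0.5 + level / 2.0)
    mu = np.polyval(beta_mean, age)       # fitted mean at this age
    sd = np.polyval(beta_sd, age)         # fitted SD at this age
    return mu - z * sd, mu + z * sd
```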
Automatic latency equalization in VHDL-implemented complex pipelined systems
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.
2016-09-01
In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is tedious and error-prone work. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared to a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open-source project under the BSD license.
Sweep excitation with order tracking: A new tactic for beam crack analysis
NASA Astrophysics Data System (ADS)
Wei, Dongdong; Wang, KeSheng; Zhang, Mian; Zuo, Ming J.
2018-04-01
Crack detection in beams and beam-like structures is an important issue in industry and has attracted numerous investigations. A local crack leads to changes in global system dynamics and produces non-linear vibration responses. Many researchers have studied these non-linearities for beam crack diagnosis. However, most reported methods are based on impact excitation or constant-frequency excitation. Few studies have focused on crack detection through external sweep excitation, which reveals abundant dynamic characteristics of the system. Together with a signal resampling technique inspired by Computed Order Tracking, this paper utilizes vibration responses under sweep excitation to diagnose the crack status of beams. A data-driven method for crack depth evaluation is proposed, and window-based harmonic extraction approaches are studied. The effectiveness of sweep excitation and the proposed method is experimentally validated.
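The resampling step inspired by Computed Order Tracking can be sketched as a time-to-phase interpolation (illustrative; the instantaneous frequency of the sweep is assumed known):

```python
import numpy as np

def order_resample(signal, t, inst_freq, samples_per_cycle=64):
    """Resample a sweep response from uniform time to uniform excitation
    phase, so harmonics of the sweep line up as constant orders."""
    dt = t[1] - t[0]
    phase = 2.0 * np.pi * np.cumsum(inst_freq) * dt      # unwrapped excitation phase
    n_out = int(phase[-1] / (2.0 * np.pi) * samples_per_cycle)
    uniform_phase = np.linspace(phase[0], phase[-1], n_out)
    return np.interp(uniform_phase, phase, signal)
```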
Poppies for medicine in Afghanistan: lessons from India and Turkey.
Windle, James
2011-01-01
This study examines India and Turkey as case studies relevant to the Senlis Council’s ‘poppies for medicine’ proposal. The proposal is that Afghan farmers are licensed to produce opium for medical and scientific purposes. Here it is posited that the Senlis proposal neglects at least three key lessons from the Turkish and Indian experiences. First, not enough weight has been given to diversion from licit markets, as experienced in India. Second, both India and Turkey had significantly more efficient state institutions with authority over the licensed growing areas. Third, the proposal appears to overlook the fact that Turkey’s successful transition was largely due to the use of the poppy straw method of opium production. It is concluded that, while innovative and creative policy proposals such as that of the Senlis proposal are required if Afghanistan is to move beyond its present problems, ‘poppies for medicine’ does not withstand evidence-based scrutiny.
Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong
2017-08-03
The feature selection (FS) process is essential in the medical area as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, the defining characteristic of survival analysis. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) are preferred but not commonly used due to censoring. Techniques previously proposed to adapt MLT to perform FS with survival data cannot be used with high levels of censoring. The authors' previous publications proposed a technique to deal with the high level of censoring and used existing FS techniques to reduce the dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the previously proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on an artificial neural network (ANN) MLT is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach uses a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers in the United Kingdom, allowing it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups: its concordance index and estimated AUC are better than those of Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort spent by physicians collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR. This predictive model can help clinicians decide patients' future observation plans.
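A minimal sketch of a wrapper FS around an ANN with scikit-learn; the dataset, label construction, and the use of SequentialFeatureSelector are illustrative stand-ins for the paper's wrapper and uncensoring steps:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 30))               # hypothetical EVAR feature matrix
y = (X[:, 3] + X[:, 7] > 0).astype(int)      # hypothetical 5-year re-intervention label
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000)
selector = SequentialFeatureSelector(ann, n_features_to_select=5, cv=3).fit(X, y)
X_reduced = selector.transform(X)            # reduced feature subset
```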
Huang, Fu-Chun; Chen, Yih-Far; Lee, Gwo-Bin
2007-04-01
This study presents a new packaging method that uses a polyethylene/thermoplastic elastomer (PE/TPE) film to seal an injection-molded CE chip made of either poly(methyl methacrylate) (PMMA) or polycarbonate (PC). The packaging is performed at atmospheric pressure and room temperature, making it a fast, easy, and reliable bonding method to form a sealed CE chip for chemical analysis and biomedical applications. The fabrication of the PMMA and PC microfluidic channels is accomplished by an injection-molding process, which can be mass-produced for commercial applications. In addition to the microfluidic CE channels, 3-D reservoirs for storing biosamples and CE buffers are also formed during this injection-molding process. With this approach, a commercial CE chip can be low-cost and disposable. The functionality of the mass-produced CE chip is demonstrated through its successful separation of phiX174 DNA/HaeIII markers. Experimental data show that the S/N for the CE chips using the PE/TPE film has a value of 5.34 when utilizing DNA markers with a concentration of 2 ng/microL and a CE buffer of 2% hydroxypropyl-methylcellulose (HPMC) in Tris-borate-EDTA (TBE) with 1% YO-PRO-1 fluorescent dye. Thus, the detection limit of the developed chips is improved. Lastly, the developed CE chips were used for the separation and detection of PCR products. A mixture of an amplified antibiotic gene for Streptococcus pneumoniae and phiX174 DNA/HaeIII markers was successfully separated and detected using the proposed CE chips. Experimental data show that these DNA samples were separated within 2 min. This study thus proposes a promising method for the development of mass-produced CE chips.
NASA Astrophysics Data System (ADS)
Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei
2016-08-01
The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have difficulties with displacement. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts, which considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then examined by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. This additional ensemble information is then fed into a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the ensemble mean of the flood forecast. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.
Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping
2018-07-01
Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment tumors automatically. A modified scale-invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from the segmented tumors. A Fisher vector was proposed to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to extract the SIFT features with PCNSL-GBM discrimination ability. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation, based on 20 cases of PCNSL and 44 cases of GBM, was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA algorithm. The proposed method produces PCNSL vs. GBM differentiation with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort). Owing to the local voxel arrangement characterization provided by the SIFT features, the proposed method produced more competitive PCNSL-GBM differentiation performance using conventional MRI than methods based on advanced MRI.
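The final classification stage admits a compact sketch with scikit-learn; the feature matrix and labels below are synthetic placeholders matching only the reported dimensions (496 selected features, 20 PCNSL vs. 44 GBM):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 496))               # hypothetical selected SIFT features
y = np.array([0] * 20 + [1] * 44)            # 0 = PCNSL, 1 = GBM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())   # leave-one-out predictions
```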
Prediction of anti-cancer drug response by kernelized multi-task learning.
Tan, Mehmet
2016-10-01
Chemotherapy and targeted therapy are two of the main treatment options for many types of cancer. Due to the heterogeneous nature of cancer, the success of therapeutic agents differs among patients. In this sense, determination of the chemotherapeutic response of the malignant cells is essential for establishing a personalized treatment protocol and designing new drugs. With the recent technological advances in producing large amounts of pharmacogenomic data, in silico methods have become important tools to achieve this aim. Data produced using cancer cell lines provide a test bed for machine learning algorithms that try to predict the response of cancer cells to different agents. The potential use of these algorithms in drug discovery/repositioning and personalized treatments motivated us in this study to work on predicting drug response by exploiting recent pharmacogenomic databases. We aim to improve the prediction of drug response of cancer cell lines. We propose a method that employs multi-task learning to improve learning by transfer, and kernels to extract non-linear relationships to predict drug response. The method outperforms three state-of-the-art algorithms on three anti-cancer drug screen datasets. We achieved a mean squared error of 3.305 and 0.501 on two different large-scale screen datasets; on a recent challenge dataset, we obtained an error of 0.556. We report the methodological comparison results as well as the performance of the proposed algorithm on each single drug. The results show that the proposed method is a strong candidate for predicting the drug response of cancer cell lines in silico for pre-clinical studies. The source code of the algorithm and the data used can be obtained from http://mtan.etu.edu.tr/Supplementary/kMTrace/. Copyright © 2016 Elsevier B.V. All rights reserved.
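As a single-task baseline for the kernelized idea, a kernel ridge regressor per drug can be sketched as follows (the multi-task coupling across drugs, which is the core of the paper, is not reproduced here; the data are hypothetical):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                          # hypothetical cell-line features
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 200)   # hypothetical drug response
model = KernelRidge(kernel="rbf", alpha=1.0).fit(X, y)  # non-linear single-task baseline
```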
NASA Astrophysics Data System (ADS)
Chu, Shu-Chun
2008-07-01
This study proposes a systematic method of selectively exciting particular Ince-Gaussian modes (IGMs), together with a three-lens configuration, for generating multiple vortex beams with forced IGMs in laser-diode (LD)-pumped solid-state lasers. Simply changing the lateral off-axis position of the tight pump-beam focus on the laser crystal can produce the desired multiple optical vortex beam from the laser in a well-controlled manner, using a proposed astigmatic mode converter assembled into one body with the laser cavity.
Hydrologic testing of tight zones in southeastern New Mexico.
Dennehy, K.F.; Davis, P.A.
1981-01-01
Increased attention is being directed toward the investigation of tight zones in relation to the storage and disposal of hazardous wastes. Shut-in tests, slug tests, and pressure-slug tests are being used at the proposed Waste Isolation Pilot Plant site, New Mexico, to evaluate the fluid-transmitting properties of several zones above the proposed repository zone. All three testing methods were used in various combinations to obtain values for the hydraulic properties of the test zones. Multiple testing on the same zone produced similar results. -from Authors
Ware, Matthew J.; Colbert, Kevin; Keshishian, Vazrik; Ho, Jason; Corr, Stuart J.; Curley, Steven A.
2016-01-01
In vitro characterization of tumor cell biology or of potential anticancer drugs is usually performed using tumor cell lines cultured as a monolayer. However, it has been previously shown that three-dimensional (3D) organization of the tumor cells is important to provide insights on tumor biology and transport of therapeutics. Several methods to create 3D tumors in vitro have been proposed, with the hanging drop technique being the most simple and, thus, most frequently used. However, in many cell lines this method has failed to form the desired 3D tumor structures. The aim of this study was to design and test an easy-to-use and highly reproducible modification of the hanging drop method for tumor sphere formation by adding methylcellulose polymer. Most pancreatic cancer cells do not form cohesive and manageable spheres when the original hanging drop method is used, so we investigated these cell lines for our modified hanging drop method. The spheroids produced by this improved technique were analyzed by histology, light microscopy, immunohistochemistry, and scanning electron microscopy. Results show that, using the proposed simple method, we were able to produce uniform spheroids for all five of the tested human pancreatic cancer cell lines: Panc-1, BxPC-3, Capan-1, MiaPaCa-2, and AsPC-1. We believe that this method can be used as a reliable and reproducible technique to make 3D cancer spheroids for use in tumor biology research and evaluation of therapeutic responses, and for the development of bio-artificial tissues. PMID:26830354
A bias-corrected estimator in multiple imputation for missing data.
Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki
2018-05-29
Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice, since it is possible to use ordinary statistical methods for a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation, using as a weight the density ratio between the imputation model and the true conditional density function for the missing variable. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.
Denoising in digital speckle pattern interferometry using wave atoms.
Federico, Alejandro; Kaufmann, Guillermo H
2007-05-15
We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.
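Wave atoms are not available in standard Python libraries, but the thresholding principle can be illustrated with an analogous wavelet shrinkage using PyWavelets (a sketch, not the paper's wave-atom transform):

```python
import numpy as np
import pywt

def denoise(img, wavelet="db4", level=3, k=3.0):
    """Hard-threshold detail coefficients; noise level from the finest scale."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # robust noise estimate
    thr = k * sigma
    out = [coeffs[0]] + [tuple(np.where(np.abs(c) > thr, c, 0.0) for c in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(out, wavelet)
```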
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
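The factorization building block highlighted by the survey can be illustrated with a plain NMF from scikit-learn (a sketch; the survey's tri-factorization and multi-network coupling approaches are more elaborate):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 40)))       # hypothetical non-negative relation matrix
model = NMF(n_components=5, init="nndsvda", max_iter=500)
W = model.fit_transform(X)                   # low-dimensional entity factors
H = model.components_                        # X is approximated by W @ H
```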
NASA Astrophysics Data System (ADS)
Akdemir, Bayram; Güneş, Salih; Yosunkaya, Şebnem
Sleep disorders are very common among the public but often go unrecognized. Obstructive Sleep Apnea Syndrome (OSAS) is characterized by a decreased oxygen saturation level and repetitive upper respiratory tract obstruction episodes during full-night sleep. In the present study, we propose a novel data normalization method called the Line Based Normalization Method (LBNM) to evaluate OSAS, using a real dataset obtained from a polysomnography device used as a diagnostic tool in patients clinically suspected of suffering from OSAS. Here, we combine LBNM with classification methods comprising the C4.5 decision tree classifier and an Artificial Neural Network (ANN) to diagnose OSAS. Firstly, each clinical feature in the OSAS dataset is scaled by the LBNM method to the range [0,1]. Secondly, the normalized OSAS dataset is classified using different classifier algorithms, including the C4.5 decision tree classifier and the ANN. The proposed normalization method was compared with the min-max normalization, z-score normalization, and decimal scaling methods existing in the literature on the diagnosis of OSAS. LBNM produced very promising results in the assessment of OSAS. Moreover, this method could be applied to other biomedical datasets.
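The exact line-based rule of LBNM is given in the original article; as a point of comparison, the min-max baseline that also maps each feature to [0,1] is simply:

```python
import numpy as np

def min_max_scale(X):
    """Column-wise [0, 1] scaling of a feature matrix X of shape (n, d)."""
    X = np.asarray(X, float)
    span = X.max(axis=0) - X.min(axis=0)
    return (X - X.min(axis=0)) / np.where(span == 0, 1.0, span)
```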
Multistage morphological segmentation of bright-field and fluorescent microscopy images
NASA Astrophysics Data System (ADS)
Korzyńska, A.; Iwanowski, M.
2012-06-01
This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour using a sequence of two types of microscopic images: bright-field images and/or fluorescent images. It is based on two types of information: the cell texture coming from the bright-field images, and the intensity of light emission from fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology supported by other image processing techniques. It detects cells in an image independently of their degree of flattening and of whether they present the structures that produce texture, using synergistic information from the fluorescent light emission image as support. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. To validate the method, two types of errors have been considered: the error of cell area detection and the error of cell position, using artificial images as the "gold standard".
Development of a method of alignment between various SOLAR MAXIMUM MISSION experiments
NASA Technical Reports Server (NTRS)
1977-01-01
Results of an engineering study of the methods of alignment between various experiments for the solar maximum mission are described. The configuration studied consists of the instruments, mounts and instrument support platform located within the experiment module. Hardware design, fabrication methods and alignment techniques were studied with regard to optimizing the coalignment between the experiments and the fine sun sensor. The proposed hardware design was reviewed with regard to loads, stress, thermal distortion, alignment error budgets, fabrication techniques, alignment techniques and producibility. Methods of achieving comparable alignment accuracies on previous projects were also reviewed.
Is whole-culture synchronization biology's 'perpetual-motion machine'?
Cooper, Stephen
2004-06-01
Whole-culture or batch synchronization cannot, in theory, produce a synchronized culture because it violates a fundamental law that proposes that no batch treatment can alter the cell-age order of a culture. In analogy with the history of perpetual-motion machines, it is suggested that the study of these whole-culture 'synchronization' methods might lead to an understanding of general biological principles even though these methods cannot be used to study the normal cell cycle.
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
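The regularized-inversion core of MBIR can be sketched generically as follows (a quadratic smoothness penalty and plain gradient descent stand in for the paper's anisotropic ultrasonic forward model and solver):

```python
import numpy as np

def regularized_inversion(A, y, lam=0.1, iters=200):
    """x* = argmin ||y - A x||^2 + lam ||D x||^2 via gradient descent,
    with a first-difference smoothness regularizer D."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam * np.linalg.norm(D, 2) ** 2)
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
        x -= step * grad
    return x
```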
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-04-01
A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples with a larger flow tank and various boundary conditions are finally presented to illustrate the potential of the methodology.
Fernández de Gorostiza, Erlantz; Berzosa, Jorge; Mabe, Jon; Cortiñas, Roberto
2018-02-23
Industrial wireless applications often share the communication channel with other wireless technologies and communication protocols. This coexistence produces interferences and transmission errors which require appropriate mechanisms to manage retransmissions. Nevertheless, these mechanisms increase the network latency and overhead due to the retransmissions. Thus, the loss of data packets and the measures to handle them produce an undesirable drop in the QoS and hinder the overall robustness and energy efficiency of the network. Interference avoidance mechanisms, such as frequency hopping techniques, reduce the need for retransmissions due to interferences but they are often tailored to specific scenarios and are not easily adapted to other use cases. On the other hand, the total absence of interference avoidance mechanisms introduces a security risk because the communication channel may be intentionally attacked and interfered with to hinder or totally block it. In this paper we propose a method for supporting the design of communication solutions under dynamic channel interference conditions and we implement dynamic management policies for frequency hopping technique and channel selection at runtime. The method considers several standard frequency hopping techniques and quality metrics, and the quality and status of the available frequency channels to propose the best combined solution to minimize the side effects of interferences. A simulation tool has been developed and used in this work to validate the method.
Meher, J K; Meher, P K; Dash, G N; Raval, M K
2012-01-01
The first step in the gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or with digital filtering techniques, for the period-3 peaks that are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we show that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences based on hydration energy and dipole moment. The proposed methods produce high peaks at exon locations and effectively suppress false exons (intron regions having greater peaks than exon regions), resulting in a high discriminating factor, sensitivity and specificity.
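As an illustration of the underlying signal-processing step, the sketch below scores the period-3 component of a physico-chemically encoded sequence with an FFT. The per-nucleotide weights are placeholders, not the hydration-energy or dipole-moment values from the paper, and the window length is an arbitrary choice.

```python
import numpy as np

# Hypothetical physico-chemical weights per nucleotide (stand-ins for the
# hydration-energy / dipole-moment encodings the paper proposes).
WEIGHTS = {'A': 0.31, 'C': 0.21, 'G': 0.12, 'T': 0.41}

def period3_measure(seq, win=351, step=3):
    """Slide a window along the indicator sequence and report the
    normalized spectral power at frequency 1/3 (the period-3 peak)."""
    x = np.array([WEIGHTS[b] for b in seq], dtype=float)
    scores = []
    for s in range(0, len(x) - win + 1, step):
        w = x[s:s + win] - x[s:s + win].mean()
        X = np.fft.rfft(w)
        k = round(win / 3)                  # FFT bin closest to f = 1/3
        scores.append(abs(X[k]) ** 2 / (abs(X) ** 2).sum())
    return np.array(scores)
```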
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared spectrum (SMSP) jamming is effective in countering linear frequency modulation (LFM) radar. Based on the time-frequency distribution difference between the jamming and the echo, a jamming suppression method using the generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Then, the time-frequency image and the related gray-scale image are obtained via the GST. Finally, the Tsallis cross entropy is used to compute the optimized segmentation threshold, and the jamming suppression filter is constructed based on this threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP.
Kappa statistic for the clustered dichotomous responses from physicians and patients
Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen
2013-01-01
The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082
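A minimal sketch of the cluster bootstrap idea (resampling whole physicians, keeping each physician's patients intact) is given below; the kappa formula is the standard two-rater dichotomous version, and the toy data are invented for illustration.

```python
import numpy as np

def kappa(a, b):
    """Cohen's kappa for two dichotomous raters (0/1 responses)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
    return (po - pe) / (1 - pe)

def cluster_bootstrap_se(clusters, n_boot=2000, seed=0):
    """Resample clusters (physicians with their patients) with replacement
    and return the bootstrap standard error of kappa."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        sample = [clusters[i] for i in rng.integers(0, len(clusters), len(clusters))]
        a = np.concatenate([c[0] for c in sample])
        b = np.concatenate([c[1] for c in sample])
        stats.append(kappa(a, b))
    return np.std(stats, ddof=1)

# each cluster: (physician responses, patient responses), dichotomous 0/1
clusters = [(np.array([1, 0, 1]), np.array([1, 1, 1])),
            (np.array([0, 0]), np.array([0, 1]))]
print(cluster_bootstrap_se(clusters, n_boot=200))
```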
Magnetic quadrupoles lens for hot spot proton imaging in inertial confinement fusion
NASA Astrophysics Data System (ADS)
Teng, J.; Gu, Y. Q.; Chen, J.; Zhu, B.; Zhang, B.; Zhang, T. K.; Tan, F.; Hong, W.; Zhang, B. H.; Wang, X. Q.
2016-08-01
Imaging of DD-produced protons from an implosion hot-spot region with a miniature permanent magnetic quadrupole (PMQ) lens is proposed. The corresponding object-image relation is deduced, and an adjustment method for this imaging system is discussed. Ideal point-to-point imaging demands a monoenergetic proton source; nevertheless, we show that the image blur induced by the proton energy spread is a second-order effect and therefore controllable. A proton imaging system based on a miniature PMQ lens is designed for 2.8 MeV DD-protons, and the adjustment method in the case of a proton energy shift is proposed. The spatial resolution of this system is better than 10 μm when the proton yield is above 10^9 and the spectral width is within 10%.
Compressive Classification for TEM-EELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Weituo; Stevens, Andrew; Yang, Hao
Electron energy loss spectroscopy (EELS) is typically conducted in STEM mode with a spectrometer, or in TEM mode with energy selection. These methods produce a 3D data set (x, y, energy). Some compressive sensing [1,2] and inpainting [3,4,5] approaches have been proposed for recovering a full set of spectra from compressed measurements. In many cases the final form of the spectral data is an elemental map (an image with channels corresponding to elements). This means that most of the collected data is unused or summarized. We propose a method to directly recover the elemental map with reduced dose and acquisition time. We have designed a new computational TEM sensor for compressive classification [6,7] of energy loss spectra, called TEM-EELS.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
An unsupervised method for summarizing egocentric sport videos
NASA Astrophysics Data System (ADS)
Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec
2015-12-01
People are increasingly interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called the egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of the studies.
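As a rough illustration of unsupervised key-frame selection, here is a generic clustering variant, not the authors' algorithm (which also estimates the number of key-frames automatically); the 64-dimensional descriptors below are stand-ins for combined appearance and motion features, and k is fixed by hand.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(features, k):
    """Cluster per-frame descriptors and keep the frame nearest each
    cluster centre as a key-frame."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    keyframes = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        keyframes.append(idx[np.argmin(d)])
    return sorted(keyframes)

frames = np.random.rand(500, 64)   # stand-in appearance+motion descriptors
print(select_keyframes(frames, k=8))
```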
Deep learning architecture for air quality predictions.
Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe
2016-11-01
With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows temporal stability in all seasons. Moreover, a comparison with the spatiotemporal artificial neural network (STANN), autoregressive moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed method of performing air quality predictions has superior performance.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful in mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.
Sarrouti, Mourad; Ouatik El Alaoui, Said
2017-05-18
Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which consists of assigning a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method exhibits significantly improved performance compared to four baseline systems. The proposed method achieves a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the obtained results show that using the handcrafted lexico-syntactic patterns as features for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary. Furthermore, the results demonstrated that our method produced the best classification performance compared to the four baseline systems.
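For orientation, a minimal text-classification pipeline of the same general shape is sketched below; it uses generic tf-idf n-grams rather than the paper's handcrafted lexico-syntactic patterns, and the toy questions and labels are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy stand-ins for BioASQ-style questions, one per category.
questions = ["Is miR-21 overexpressed in glioblastoma?",
             "Which gene is mutated in cystic fibrosis?",
             "List the symptoms of Marfan syndrome.",
             "What is the role of TDP-43 in ALS?"]
labels = ["yes/no", "factoid", "list", "summary"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)
print(clf.predict(["Is aspirin effective for migraine?"]))
```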
2014-01-01
Background Neurology is complex, abstract, and difficult for students to learn. However, a good learning method for neurology clerkship training is required to help students quickly develop strong clinical thinking as well as problem-solving skills. Both the traditional lecture-based learning (LBL) and the relatively new team-based learning (TBL) methods have inherent strengths and weaknesses when applied to neurology clerkship education. However, the strengths of each method may complement the weaknesses of the other. Combining TBL with LBL may produce better learning outcomes than TBL or LBL alone. We propose a hybrid method (TBL + LBL) and designed an experiment to compare the learning outcomes with those of pure LBL and pure TBL. Methods One hundred twenty-seven fourth-year medical students attended a two-week neurology clerkship program organized by the Department of Neurology, Sun Yat-Sen Memorial Hospital. All of the students were from Grade 2007, Department of Clinical Medicine, Zhongshan School of Medicine, Sun Yat-Sen University. These students were assigned to one of three groups randomly: Group A (TBL + LBL, with 41 students), Group B (LBL, with 43 students), and Group C (TBL, with 43 students). The learning outcomes were evaluated by a questionnaire and two tests covering basic knowledge of neurology and clinical practice. Results The practice test scores of Group A were similar to those of Group B, but significantly higher than those of Group C. The theoretical test scores and the total scores of Group A were significantly higher than those of Groups B and C. In addition, 100% of the students in Group A were satisfied with the combination of TBL + LBL. Conclusions Our results support our proposal that the combination of TBL + LBL is acceptable to students and produces better learning outcomes than either method alone in neurology clerkships. In addition, the proposed hybrid method may also be suited for other medical clerkships that require students to absorb a large amount of abstract and complex course materials in a short period, such as pediatrics and internal medicine clerkships. PMID:24884854
Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd
2014-05-01
Predicting runoff-producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resources Conservation Service Curve Number equation. We used base flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extents of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides watershed planners with a new tool for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
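The Curve Number relation underlying the method is standard; a minimal sketch follows, with the antecedent-wetness adjustment reduced to the choice of CN (the paper instead indexes wetness with pre-storm base flow and extends the equation to map VSAs).

```python
def scs_runoff(P, CN, lam=0.2):
    """NRCS Curve Number runoff depth (inches) for rainfall P (inches).

    S is the potential maximum retention; runoff starts once P exceeds
    the initial abstraction Ia = lam * S (lam = 0.2 is the classic value).
    """
    S = 1000.0 / CN - 10.0
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# Wetter antecedent conditions are conventionally represented by a higher CN.
print(scs_runoff(P=2.5, CN=75))
```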
Bountris, Panagiotis; Haritou, Maria; Pouliakis, Abraham; Margari, Niki; Kyrgiou, Maria; Spathis, Aris; Pappas, Asimakis; Panayiotides, Ioannis; Paraskevaidis, Evangelos A; Karakitsos, Petros; Koutsouris, Dimitrios-Dionyssios
2014-01-01
Nowadays, there are molecular biology techniques providing information related to cervical cancer and its cause: the human Papillomavirus (HPV), including DNA microarrays identifying HPV subtypes, mRNA techniques such as nucleic acid based amplification or flow cytometry identifying E6/E7 oncogenes, and immunocytochemistry techniques such as overexpression of p16. Each one of these techniques has its own performance, limitations and advantages, thus a combinatorial approach via computational intelligence methods could exploit the benefits of each method and produce more accurate results. In this article we propose a clinical decision support system (CDSS), composed by artificial neural networks, intelligently combining the results of classic and ancillary techniques for diagnostic accuracy improvement. We evaluated this method on 740 cases with complete series of cytological assessment, molecular tests, and colposcopy examination. The CDSS demonstrated high sensitivity (89.4%), high specificity (97.1%), high positive predictive value (89.4%), and high negative predictive value (97.1%), for detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+). In comparison to the tests involved in this study and their combinations, the CDSS produced the most balanced results in terms of sensitivity, specificity, PPV, and NPV. The proposed system may reduce the referral rate for colposcopy and guide personalised management and therapeutic interventions.
Low-loss ultracompact optical power splitter using a multistep structure.
Huang, Zhe; Chan, Hau Ping; Afsar Uddin, Mohammad
2010-04-01
We propose a low-loss ultracompact optical power splitter for broadband passive optical network applications. The design is based on a multistep structure involving a two-material (core/cladding) system. The performance of the proposed device was evaluated through the three-dimensional finite-difference beam propagation method. By using the proposed design, an excess loss of 0.4 dB was achieved at a full branching angle of 24 degrees. The wavelength-dependent loss was found to be less than 0.3 dB, and the polarization-dependent loss was less than 0.05 dB from O to L bands. The device offers the potential of being mass-produced using low-cost polymer-based embossing techniques.
Coherent x-ray zoom condenser lens for diffractive and scanning microscopy.
Kimura, Takashi; Matsuyama, Satoshi; Yamauchi, Kazuto; Nishino, Yoshinori
2013-04-22
We propose a coherent x-ray zoom condenser lens composed of two-stage deformable Kirkpatrick-Baez mirrors. The lens delivers coherent x-rays with a controllable beam size, from one micrometer to a few tens of nanometers, at a fixed focal position. The lens is suitable for diffractive and scanning microscopy. We also propose non-scanning coherent diffraction microscopy for extended objects by using an apodized focused beam produced by the lens with a spatial filter. The proposed apodized-illumination method will be useful in highly efficient imaging with ultimate storage ring sources, and will also open the way to single-shot coherent diffraction microscopy of extended objects with x-ray free-electron lasers.
Mishina, T; Okano, F; Yuyama, I
1999-06-10
The single-sideband method of holography, as is well known, cuts off beams that come from conjugate images for holograms produced in the Fraunhofer region and from objects with no phase components. The single-sideband method with half-zone-plate processing is also effective in the Fresnel region for beams from an object that has phase components. However, this method restricts the viewing zone to a narrow range. We propose a method to overcome this restriction by time-alternating switching of hologram patterns and a spatial filter set on the focal plane of a reconstruction lens.
Agin, Patricia Poh; Edmonds, Susan H
2002-08-01
The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test-subjects needed, the variability of the data, and the appropriate exposure increments needed for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to the equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.
Multichannel-Hadamard calibration of high-order adaptive optics systems.
Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai
2014-06-02
We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and then the voltage-oriented Hadamard method is applied to these channels. Taking a 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method produces a significant improvement in interaction matrix measurement.
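To show why Hadamard patterns help (every actuator contributes to every measurement, improving SNR over single-actuator pokes), here is a sketch of the plain voltage-oriented Hadamard step only; the channel grouping that gives the paper its name is not reproduced, and measure_slopes is a hypothetical stand-in for the AO system.

```python
import numpy as np
from scipy.linalg import hadamard

def calibrate(measure_slopes, n_act, amp=0.1):
    """Estimate the DM interaction matrix with Hadamard voltage patterns.

    measure_slopes : callable mapping an (n_act,) voltage vector to the
                     wavefront-sensor slope vector (the AO system itself).
    """
    n = 2 ** int(np.ceil(np.log2(n_act)))   # Hadamard order: power of two
    H = hadamard(n)                          # symmetric, H @ H = n * I
    # poke the mirror with each +/-1 pattern (truncated to n_act actuators)
    S = np.column_stack([measure_slopes(amp * H[k, :n_act]) for k in range(n)])
    # for a linear system, S = amp * D @ H[:n_act, :], and H / n inverts H:
    return (S @ H)[:, :n_act] / (amp * n)
```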
Electroslag Treatment of Liquid Cast Iron
NASA Astrophysics Data System (ADS)
Grachev, V. A.
2018-01-01
The processes that occur in the liquid metal-slag system during electroslag treatment of cast iron are studied from an electrochemical standpoint. The role of electrolysis in the electroslag process is shown, and a method for producing high-strength cast iron with globular graphite using electrolysis of a slag containing magnesium oxides and fluorides is proposed and tested.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
... purpose and need, the alternatives to be studied, the impacts to be evaluated, and the evaluation methods... clear roadmap for concise development of the environmental document. In the interest of producing a... unincorporated Los Angeles County which includes east Los Angeles and west Whittier-Los Nietos. A diverse mix of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... years, and produce average litter sizes of 1 to 2 kits. In one study of known-aged females, none... comments by one of the following methods: (1) Electronically: Go to the Federal eRulemaking Portal: http... data available.'' You may submit your comments and materials concerning this proposed rule by one of...
A Proposal for a Compilation of Requirements and Teaching Methods of Courses at Gaston College.
ERIC Educational Resources Information Center
Jones, Dean H.
The purpose of this practicum was to determine the response of faculty members and students to the possibility of producing a pamphlet listing instructional methodologies and course requirements for classes at Gaston College (North Carolina). A questionnaire was completed by twenty-five instructors and twenty-five students, and additional…
Brain tumor segmentation in MR slices using improved GrowCut algorithm
NASA Astrophysics Data System (ADS)
Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying
2015-12-01
The detection of brain tumors in MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, the 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.
NASA Astrophysics Data System (ADS)
Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka
2018-04-01
To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of the detection performance caused by the alignment accuracy, it was confirmed that the vertical deviation between the corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with the trenches was confirmed.
Removal of nuisance signals from limited and sparse 1H MRSI data using a union-of-subspaces model.
Ma, Chao; Lam, Fan; Johnson, Curtis L; Liang, Zhi-Pei
2016-02-01
To remove nuisance signals (e.g., water and lipid signals) for 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. A union-of-subspaces model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. The proposed method has been evaluated using in vivo MRSI data. For conventional chemical shift imaging data with limited k-space coverage, the proposed method produced "lipid-free" spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from 1H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain. © 2015 Wiley Periodicals, Inc.
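The partial-separability idea above is essentially a low-rank/subspace argument. The toy sketch below removes a dominant temporal subspace from a space-time (Casorati) matrix by truncated SVD; it illustrates the concept only, since the paper estimates separate nuisance subspaces with spectral priors rather than using a blind truncation, and n_comp is an assumed rank.

```python
import numpy as np

def remove_nuisance(data, n_comp=8):
    """Project out the dominant temporal subspace of a Casorati matrix.

    data : (n_voxels, n_t) space-time MRSI matrix. Water/lipid signals are
    much stronger than metabolites and concentrate in the leading singular
    vectors, so subtracting the rank-n_comp approximation suppresses them.
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    nuisance = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]
    return data - nuisance
```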
A nonparametric multiple imputation approach for missing categorical data.
Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh
2017-06-06
Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value with other non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
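A condensed sketch of the two-working-model, predictive-score nearest-neighbour scheme follows; the equal weighting, the use of the maximum predicted class probability as the outcome score, and the donor-set size are simplifying assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nn_multiple_impute(X, y, w=0.5, n_donor=5, n_imp=10, seed=0):
    """Predictive-score nearest-neighbour multiple imputation (sketch).

    y : integer-coded categorical outcome stored as float, np.nan = missing.
    Two working models drive the distance: one predicts the outcome,
    one predicts the probability of being missing; w balances them.
    """
    rng = np.random.default_rng(seed)
    miss = np.isnan(y)
    # missingness model: P(missing | X)
    p_miss = LogisticRegression().fit(X, miss).predict_proba(X)[:, 1]
    # outcome model fit on complete cases; summarize by top class probability
    out = LogisticRegression(max_iter=1000).fit(X[~miss], y[~miss].astype(int))
    p_out = out.predict_proba(X).max(axis=1)
    score = w * p_out + (1 - w) * p_miss
    imputations = []
    for _ in range(n_imp):
        y_imp = y.copy()
        for i in np.where(miss)[0]:
            d = np.abs(score[~miss] - score[i])
            donors = np.where(~miss)[0][np.argsort(d)[:n_donor]]
            y_imp[i] = y[rng.choice(donors)]   # draw from nearest donors
        imputations.append(y_imp)
    return imputations
```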
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635
Castejón, Natalia; Luna, Pilar; Señoráns, Francisco J
2018-04-01
The edible oil processing industry involves large losses of organic solvent into the atmosphere and long extraction times. In this work, fast and environmentally friendly alternatives for the production of echium oil using green solvents are proposed. Advanced extraction techniques such as pressurized liquid extraction (PLE), microwave-assisted extraction (MAE) and ultrasound-assisted extraction (UAE) were evaluated to efficiently extract omega-3-rich oil from Echium plantagineum seeds. Extractions were performed with ethyl acetate, ethanol, water and ethanol:water to develop a hexane-free processing method. Optimal PLE conditions with ethanol at 150 °C for 10 min produced an oil yield (31.2%) very similar to that of Soxhlet extraction using hexane for 8 h (31.3%). The optimized UAE method with ethanol under mild conditions (55 °C) produced a high oil yield (29.1%). Consequently, the advanced extraction techniques showed good lipid yields, and the echium oil produced had the same omega-3 fatty acid composition as traditionally extracted oil. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yi, B; Rao, B; Ding, Y H; Li, M; Xu, H Y; Zhang, M; Zhuang, G; Pan, Y
2014-11-01
The dynamic resonant magnetic perturbation (DRMP) system has been developed on the J-TEXT tokamak to study the interaction between a rotating perturbation magnetic field and the plasma. When the DRMP coils are energized by two sinusoidal currents of the same frequency, a 2/1 rotating resonant magnetic perturbation component is generated. At the same time, however, a small perturbation component rotating in the opposite direction is also produced because of the control error of the currents. This small component has a detrimental influence on the experimental investigations. In fact, the mode spectrum of the generated DRMP can be optimized with accurate control of the phase difference between the two currents. In this paper, a new phase control method based on a novel all-digital phase-locked loop (ADPLL) is proposed. The proposed method features accurate phase control and flexible phase adjustment. Modeling and analysis of the proposed ADPLL are presented to guide the design of the parameters of the phase controller in order to obtain better performance. Testing results verify the effectiveness of the ADPLL and the validity of the method as applied to the DRMP system.
[Using neural networks based template matching method to obtain redshifts of normal galaxies].
Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng
2005-06-01
Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). In order to determine NG redshifts, an automatic and effective method is proposed in this paper, which consists of the following three main steps. (1) From the template of a normal galaxy, two sets of samples are simulated, one with redshifts of 0.0-0.3 and the other of 0.3-0.5; PCA is then used to extract the principal components, and the training samples are projected onto the principal-component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a probabilistic neural network to obtain a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the possible range of its redshift, and template matching is then invoked to locate the redshift value within the estimated range. Compared with the traditional template matching technique over an unconstrained range, the proposed method not only halves the computational load but also increases the estimation accuracy. As a result, the proposed method is particularly useful for the automatic processing of spectra produced by a large-scale sky survey project.
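Step (3) is plain template matching on a log-wavelength grid, where redshift becomes a rigid shift; a minimal sketch is below. The classifier-narrowed search range is represented only by z_max, and both spectra are assumed to be resampled onto a common log10-wavelength grid with the stated step.

```python
import numpy as np

def estimate_redshift(flux, template, loglam_step=1e-4, z_max=0.5):
    """Grid-search template matching on a log10-wavelength grid, where a
    redshift z is a rigid shift of log10(1 + z) / loglam_step pixels."""
    best_z, best_corr = 0.0, -np.inf
    for shift in range(int(np.log10(1 + z_max) / loglam_step)):
        t = np.roll(template, shift)[shift:]   # shifted template, no wrap
        f = flux[shift:]
        c = np.corrcoef(f, t)[0, 1]
        if c > best_corr:
            best_z, best_corr = 10 ** (shift * loglam_step) - 1, c
    return best_z
```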
Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei
2018-05-01
To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
Compressibility-aware media retargeting with structure preserving.
Wang, Shu-Fan; Lai, Shang-Hong
2011-03-01
A number of algorithms have been proposed for intelligent image/video retargeting that retain image content as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Different from previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside each block to deform uniformly in either the x or y direction. However, the flexibility for retargeting differs considerably between images. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. Thus, the resized media preserves the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than previous methods.
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative density function (CDF) and its improvements, which are based on gray-level transformations associated with the pixel histogram, perform well under uniform contrast/brightness differences. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on the generalized fuzzy operator (GFO) with a least-squares method. The results show that 50% of the contrast/brightness error can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented, and this modified GFO method produces better contrast normalization results than the CDF approach.
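For context, the CDF baseline that the paper improves on is classic histogram matching; a compact sketch is below. Being a single global gray-level mapping, it cannot adapt to spatially nonuniform contrast, which is the failure mode motivating the GFO approach.

```python
import numpy as np

def match_histogram(src, ref):
    """Classic CDF-based gray-level matching: map each source level to
    the reference level with the same cumulative probability."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # per-level lookup table
    return mapped[np.searchsorted(s_vals, src.ravel())].reshape(src.shape)
```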
Improving Pharmaceutical Protein Production in Oryza sativa
Kuo, Yu-Chieh; Tan, Chia-Chun; Ku, Jung-Ting; Hsu, Wei-Cho; Su, Sung-Chieh; Lu, Chung-An; Huang, Li-Fen
2013-01-01
Application of plant expression systems in the production of recombinant proteins has several advantages, such as low maintenance cost, absence of human pathogens, and possession of complex post-translational glycosylation capabilities. Plants have been successfully used to produce recombinant cytokines, vaccines, antibodies, and other proteins, and rice (Oryza sativa) is a potential plant for use as a recombinant protein expression system. After successful transformation, transgenic rice cells can be either regenerated into whole plants or grown as cell cultures that can be upscaled into bioreactors. This review summarizes recent advances in the production of different recombinant proteins in rice and describes their production methods as well as methods to improve protein yield and quality. Glycosylation and its impact on plant development and protein production are discussed, and several methods of improving yield and quality that have not been incorporated in rice expression systems are also proposed. Finally, different bioreactor options are explored and their advantages are analyzed. PMID:23615467
Damage severity estimation from the global stiffness decrease
NASA Astrophysics Data System (ADS)
Nitescu, C.; Gillich, G. R.; Abdel Wahab, M.; Manescu, T.; Korka, Z. I.
2017-05-01
In current damage detection methods, localization and severity estimation can be treated separately. The severity is commonly estimated using a fracture mechanics approach, with the main disadvantage of involving empirically deduced relations. In this paper, a damage severity estimator based on the global stiffness reduction is proposed. This feature is computed from the deflections of the intact and damaged beam, respectively. The damage has its strongest effect where the bending moment achieves its maxima; if the damage is positioned elsewhere on the beam, its effect becomes smaller, because the stress is produced by a diminished bending moment. It is shown that the global stiffness reduction produced by a crack is the same for all beams with a similar cross-section, regardless of the boundary conditions. Two mathematical relations are derived: one indicating the severity, and another indicating the effect of removing the damage from the beam. Measurements on damaged beams with different boundary conditions and cross-sections are carried out, and the location and severity are found using the proposed relations. These comparisons prove that the proposed approach can be used to accurately compute the severity estimator.
Modified Dual Three-Pulse Modulation technique for single-phase inverter topology
NASA Astrophysics Data System (ADS)
Sree Harsha, N. R.; Anitha, G. S.; Sreedevi, A.
2016-01-01
In a recent paper, a new modulation technique called Dual Three-Pulse Modulation (DTPM) was proposed to improve the efficiency of the power converters of electric/hybrid/fuel-cell vehicles. It was simulated in PSIM 9.0.4 and uses analog multiplexers to generate the modulating signals for the DC/DC converter and the inverter. The circuit used is complex, and many other simulation packages do not support analog multiplexers. Moreover, the DTPM technique produces modulating signals for the converter that are needed to produce the modulating signals for the inverter; hence, it cannot be used efficiently to switch the devices of a stand-alone inverter. We propose a new method to generate the modulating signals to switch the MOSFETs of a single-phase DTPM-based stand-alone inverter. The proposed circuits are simulated in Multisim 12.0. We also show an alternative way to switch a DC/DC converter in the manner depicted by the DTPM technique, both in simulation (MATLAB/Simulink) and in hardware. The circuitry is relatively simple and can be used for further investigation of the DTPM technique.
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, as TV favors a piecewise-constant solution, flat regions of the processed image easily exhibit "staircase effects", and the amplitude of the edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot change with the local spatial information of the image. In this paper, we propose a novel scatter-matrix-eigenvalues-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detail information. Moreover, it is more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to those of most methods in both visual image quality and quantitative measures.
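To see the trade-off that motivates a spatially adaptive weight, the snippet below runs off-the-shelf TV denoising (Chambolle's scheme from scikit-image, not the SMETV algorithm) at two global weights on a synthetic piecewise-constant image: a small weight leaves noise, a large one flattens detail.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                      # piecewise-constant scene
noisy = img + 0.2 * rng.standard_normal(img.shape)

# A single global weight trades noise removal against staircase artifacts;
# the paper's point is to let this weight vary with local edge content.
for weight in (0.05, 0.2):
    restored = denoise_tv_chambolle(noisy, weight=weight)
    print(weight, float(np.abs(restored - img).mean()))
```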
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of the participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that is based on the OMB Generic Method and should be more likely to produce high-quality metrics that result in continuous process improvement.
Recurrent fuzzy ranking methods
NASA Astrophysics Data System (ADS)
Hajjari, Tayebeh
2012-11-01
With the increasing development of fuzzy set theory in various scientific fields, there is a growing need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business and some other fuzzy application systems. Several strategies have been proposed for the ranking of fuzzy numbers. Each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.
NASA Astrophysics Data System (ADS)
Zhu, Xiong-Wei; Wang, Shu-Hong; Chen, Sen-Yu
2009-10-01
There are many linac-based methods for THz radiation production. As one of the options for the Beijing Advanced Light Source, an ERL test facility is proposed for THz radiation. In this test facility, there are four ways to produce THz radiation: coherent synchrotron radiation (CSR), synchrotron radiation (SR), a low-gain FEL oscillator, and a high-gain SASE FEL. In this paper, we study the characteristics of these four kinds of THz light sources.
Tourism forecasting using modified empirical mode decomposition and group method of data handling
NASA Astrophysics Data System (ADS)
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model using modified empirical mode decomposition (EMD) and the group method of data handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted respectively using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the root mean square error (RMSE) and the mean absolute percentage error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.
Hasani, E; Parravicini, J; Tartara, L; Tomaselli, A; Tomassini, D
2018-05-01
We propose an innovative experimental approach to estimating the two-photon absorption (TPA) spectrum of a fluorescent material. Our method extends the standard indirect fluorescence-based TPA measurement by employing a line-shaped excitation beam, generating a line-shaped fluorescence emission. Such a configuration, which requires a relatively high amount of optical power, yields a greatly increased fluorescence signal, thus avoiding the photon-counting detection devices usually used in these measurements and allowing the use of detectors such as charge-coupled device (CCD) cameras. The method is tested on a fluorescent isothiocyanate sample, whose TPA spectrum, measured with the proposed technique, is compared with the TPA spectra reported in the literature, confirming the validity of our experimental approach. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
Image feature based GPS trace filtering for road network generation and road segmentation
Yuan, Jiangye; Cheriyadat, Anil M.
2015-10-19
We propose a new method to infer road networks from GPS trace data and accurately segment road regions in high-resolution aerial images. Unlike previous efforts that rely on GPS traces alone, we exploit image features to infer road networks from noisy trace data. The inferred road network is used to guide road segmentation. We show that the number of image segments spanned by the traces and the trace orientation validated with image features are important attributes for identifying GPS traces on road regions. Based on filtered traces, we construct road networks and integrate them with image features to segment road regions. Lastly, our experiments show that the proposed method produces more accurate road networks than the leading method that uses GPS traces alone, and also achieves high accuracy in segmenting road regions even with very noisy GPS data.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.
Simple Method to Generate Terawatt-Attosecond X-Ray Free-Electron-Laser Pulses.
Prat, Eduard; Reiche, Sven
2015-06-19
X-ray free-electron lasers (XFELs) are cutting-edge research tools that produce almost fully coherent radiation with high power and short-pulse length with applications in multiple science fields. There is a strong demand to achieve even shorter pulses and higher radiation powers than the ones obtained at state-of-the-art XFEL facilities. In this context we propose a novel method to generate terawatt-attosecond XFEL pulses, where an XFEL pulse is pushed through several short good-beam regions of the electron bunch. In addition to the elements of conventional XFEL facilities, the method uses only a multiple-slotted foil and small electron delays between undulator sections. Our scheme is thus simple, compact, and easy to implement both in already operating as well as future XFEL projects. We present numerical simulations that confirm the feasibility and validity of our proposal.
On dealing with multiple correlation peaks in PIV
NASA Astrophysics Data System (ADS)
Masullo, A.; Theunissen, R.
2018-05-01
A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and to reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating the images according to the peak displacements and selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
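A rough sketch of the multi-peak detection step follows, assuming a relative-threshold rule (a fixed fraction of the global maximum) in place of the paper's automatic threshold; the neighborhood size and fraction are illustrative assumptions.

```python
# Detect multiple candidate peaks in a PIV correlation map.
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(corr, rel_threshold=0.5, neighborhood=3):
    local_max = (corr == maximum_filter(corr, size=neighborhood))
    strong = corr >= rel_threshold * corr.max()
    return np.argwhere(local_max & strong)   # rows: (row, col) of candidates

rng = np.random.default_rng(0)
corr = rng.random((32, 32)) * 0.2
corr[10, 12] = 1.0          # two displacement populations inside one window
corr[20, 25] = 0.9
print(find_peaks(corr))     # both peaks survive, instead of one winner-take-all
```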
Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data
Abram, Samantha V.; Helwig, Nathaniel E.; Moodie, Craig A.; DeYoung, Colin G.; MacDonald, Angus W.; Waller, Niels G.
2016-01-01
Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks. PMID:27516732
NASA Astrophysics Data System (ADS)
Wong, Jaime G.; Rosi, Giuseppe A.; Rouhi, Amirreza; Rival, David E.
2017-10-01
Particle tracking velocimetry (PTV) produces high-quality temporal information that is often neglected when computing spatial gradients. A method is presented here to utilize this temporal information in order to improve the estimation of spatial gradients for spatially unstructured Lagrangian data sets. Starting with an initial guess, this method penalizes any gradient estimate where the substantial derivative of vorticity along a pathline is not equal to the local vortex stretching/tilting. Furthermore, given an initial guess, this method can proceed on an individual pathline without any further reference to neighbouring pathlines. The equivalence of the substantial derivative and vortex stretching/tilting is based on the vorticity transport equation, where viscous diffusion is neglected. By minimizing the residual of the vorticity-transport equation, the proposed method is first tested to reduce error and noise on a synthetic Taylor-Green vortex field dissipating in time. Furthermore, when the proposed method is applied to high-density experimental data collected with 'Shake-the-Box' PTV, noise within the spatial gradients is significantly reduced. In the particular test case investigated here of an accelerating circular plate captured during a single run, the method acts to delineate the shear layer and vortex core, as well as resolve the Kelvin-Helmholtz instabilities, which were previously unidentifiable without the use of ensemble averaging. The proposed method shows promise for improving PTV measurements that require robust spatial gradients while retaining the unstructured Lagrangian perspective.
A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.
Wang, Lujia; Liu, Ming; Meng, Max Q-H
2017-02-01
Cloud computing enables users to share computing resources on demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks, since cloud robotic systems have additional constraints such as limited bandwidth and a dynamic structure. However, most multirobot applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely the link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm is fast, robust, accurate, and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval in physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation
Spratling, M. W.; De Meyer, K.; Kompass, R.
2009-01-01
This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently-used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we first investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; second, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with its predecessors, including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task that has been used to assess performance. PMID:19424442
Extracting Communities from Complex Networks by the k-Dense Method
NASA Astrophysics Data System (ADS)
Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro
To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure and produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial amount of computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely, blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
Abdul Kamal Nazer, Meeran Mohideen; Hameed, Abdul Rahman Shahul; Riyazuddin, Patel
2004-01-01
A simple and rapid potentiometric method for the estimation of ascorbic acid in pharmaceutical dosage forms has been developed. The method is based on treating ascorbic acid with iodine and titrating the iodide produced (equivalent to the ascorbic acid) with silver nitrate, using a copper-based mercury film electrode (CBMFE) as the indicator electrode. An interference study was carried out to check for possible interference from the usual excipients and other vitamins. The precision and accuracy of the method were assessed by the application of a lack-of-fit test and other statistical methods. The results of the proposed method and the British Pharmacopoeia method were compared using F- and t-tests of significance.
Lagrangian numerical methods for ocean biogeochemical simulations
NASA Astrophysics Data System (ADS)
Paparella, Francesco; Popolizio, Marina
2018-05-01
We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible, as is commonplace in ocean flows. Our methods consist of augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.
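The coupling idea can be illustrated with a toy one-dimensional sketch: particles are advected along characteristics, and antisymmetric pairwise exchanges between nearby particles mimic diffusion while conserving mass. The interaction radius and exchange coefficient below are illustrative assumptions, not the paper's scheme.

```python
# Advect particles, then exchange concentration between near neighbours.
import numpy as np

def step(x, c, u, dt, radius=0.05, kappa=0.5):
    x = x + dt * u(x)                            # advection (characteristics)
    for i in range(len(x)):                      # couple nearby particles
        near = np.where(np.abs(x - x[i]) < radius)[0]
        for j in near[near > i]:
            flux = kappa * dt * (c[j] - c[i])    # antisymmetric pairwise flux
            c[i] += flux                         # what one particle gains ...
            c[j] -= flux                         # ... the other loses: mass conserved
    return x, c

rng = np.random.default_rng(1)
x = np.sort(rng.random(200))
c = np.exp(-((x - 0.5) ** 2) / 0.001)            # initial concentration blob
for _ in range(100):
    x, c = step(x, c, u=lambda s: 0.1 * np.ones_like(s), dt=0.01)
print(c.sum())                                   # total mass unchanged by exchanges
```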
A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.
Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa
2017-04-01
Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators or associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in disposal facilities. The radiological characterization of waste includes establishing the list of produced radionuclides, called the "radionuclide inventory", and estimating their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity, can be extended to other activated materials, and can be used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to those occurring at CERN.
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; bidirectional motion compensation is then applied by adaptively mixing the two. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
NASA Astrophysics Data System (ADS)
Rafiq Abuturab, Muhammad
2018-01-01
A new asymmetric multiple-information cryptosystem based on chaotic spiral phase masks (CSPMs) and random spectrum decomposition is put forward. In the proposed system, each channel of a secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to multiple secret images to obtain their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that different secret images are encrypted by different CSPMs, whose parameters serve as sensitive decryption/private keys that are completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easier to position in the decoding process owing to their own centering mark on the axial focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
An Intelligent Monitoring Network for Detection of Cracks in Anvils of High-Press Apparatus.
Tian, Hao; Yan, Zhaoli; Yang, Jun
2018-04-09
Due to the endurance of alternating high pressure and temperature, the carbide anvils of the high-pressure apparatus, which are widely used in the synthetic diamond industry, are prone to cracking. In this paper, an acoustic method is used to monitor crack events, and an intelligent monitoring network is proposed to classify the sound samples. The pulse sound signals produced by such cracking are first extracted based on a short-time energy threshold. Then, the signals are processed with the proposed intelligent monitoring network to identify the operating condition of the anvil of the high-pressure apparatus. The monitoring network is an improved convolutional neural network that addresses problems that may occur in practice. The length of the pulse sound excited by crack growth is variable, so a spatial pyramid pooling layer is adopted to solve the variable-length input problem, as sketched below. An adaptive weighted algorithm for the loss function is proposed to handle the class imbalance problem. Finally, the good performance of the proposed intelligent monitoring network regarding accuracy and balance is validated through experiments.
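A minimal sketch of the two ingredients named above, written with PyTorch: a one-dimensional spatial pyramid pooling layer that maps variable-length pulse signals to a fixed-size vector, and a class-weighted cross-entropy loss standing in for the adaptive weighting (a simplification); the layer sizes and weights are assumptions.

```python
# Spatial pyramid pooling over 1-D signals plus a weighted loss.
import torch
import torch.nn as nn

class SPP1d(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = [nn.AdaptiveMaxPool1d(k) for k in levels]

    def forward(self, x):                        # x: (batch, channels, length)
        return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

spp = SPP1d()
short = torch.randn(8, 16, 230)                  # variable-length inputs ...
long = torch.randn(8, 16, 710)
print(spp(short).shape, spp(long).shape)         # ... same fixed-size output

# Class-weighted cross-entropy: up-weight the rare "crack" class.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))
```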
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Oguz, Ipek; Styner, Martin
2016-03-01
The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe the brain's developmental changes that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods have been proposed that are applicable to the human brain and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis [1], we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Last, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy and does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed with the previous cortical thickness analysis pipeline.
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
NASA Astrophysics Data System (ADS)
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of a group through energy trading within the group. In this paper, we extend the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enables us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extend the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem; the bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we propose decomposing the problem into a set of single-period problems in order to solve it faster. In order to decompose the problem, we propose a CO2 emission allowance distribution method, called the EP method. Computational experiments confirm that the proposed method produces solutions whose group costs are close to the lower-bound group costs. In addition, the EP method achieves a reduction in computational time without losing solution quality.
NASA Astrophysics Data System (ADS)
Liu, Ruiwen; Jiao, Binbin; Kong, Yanmei; Li, Zhigang; Shang, Haiping; Lu, Dike; Gao, Chaoqun; Chen, Dapeng
2013-09-01
Micro-devices with a bi-material cantilever (BMC) commonly suffer from initial curvature due to the mismatch of residual stress. Traditional corrective methods to reduce the residual stress mismatch generally involve the development of different material deposition recipes. In this paper, a new method for reducing residual stress mismatch in a BMC is proposed based on various previously developed deposition recipes. An initial material film is deposited using two or more developed deposition recipes. This first film is designed to introduce a stepped stress gradient, which is then balanced by overlapping a second material film on the first, using appropriate deposition recipes, to form a nearly stress-balanced structure. A theoretical model is proposed based on both the moment balance principle and equal total strain at the interface of two adjacent layers. Experimental results and analytical models suggest that the proposed method is effective in producing multi-layer micro-cantilevers that display balanced residual stresses. The method provides a generic solution to the problem of mismatched initial stresses, which universally exists in micro-electro-mechanical systems (MEMS) devices based on a BMC. Moreover, the method can be incorporated into a MEMS design automation package for efficient design of various multiple-material-layer devices from a MEMS material library and developed deposition recipes.
A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors
Paul, Sarbajit; Chang, Junghwan
2015-01-01
A new method to detect the mover position of a linear motor is proposed in this paper. This method employs a simple, cheap Hall-effect-sensor-based magnetic sensor unit to detect the mover position of the linear motor. As the linear motor moves, Hall effect sensor modules separated by 120° electrical, exploiting the balanced three-phase condition (va + vb + vc = 0), are used to produce three-phase signals. The amplitudes of the sensor output voltage signals are adjusted to unit amplitude to minimize amplitude errors. A three-phase to two-phase transformation is then applied to the unit-amplitude signals to reduce the harmonic components at multiples of three. The final output is converted to position data by the use of the arctangent function, as sketched below. The measurement accuracy of the new method is analyzed by experiments and compared with the conventional two-phase method. Using the same number of sensor modules as the conventional two-phase method, the proposed method gives more accurate position information than the conventional system, where sensors are separated by 90° electrical. PMID:26506348
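A minimal sketch of the position computation under stated assumptions: normalize the three signals, apply a three-phase to two-phase (Clarke) transformation, and recover the electrical angle with the arctangent. The pole-pitch value and the simulated signals are illustrative assumptions.

```python
# Mover position from three Hall signals separated by 120 degrees electrical.
import numpy as np

def position(va, vb, vc, pole_pitch_mm=30.0):
    v = np.stack([va, vb, vc])
    v = v / np.abs(v).max(axis=1, keepdims=True)     # unit-amplitude signals
    alpha = (2 * v[0] - v[1] - v[2]) / 3.0           # Clarke (3-to-2 phase)
    beta = (v[1] - v[2]) / np.sqrt(3.0)              # transformation
    theta = np.unwrap(np.arctan2(beta, alpha))       # electrical angle
    return theta * pole_pitch_mm / (2 * np.pi)       # angle -> displacement

t = np.linspace(0, 1, 1000)
angle = 2 * np.pi * 3 * t                            # simulated travel
va, vb, vc = (np.cos(angle + k * 2 * np.pi / 3) for k in (0, -1, 1))
print(position(va, vb, vc)[-1])                      # ~90 mm for 3 pole pitches
```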
A new method for electric impedance imaging using an eddy current with a tetrapolar circuit.
Ahsan-Ul-Ambia; Toda, Shogo; Takemae, Tadashi; Kosugi, Yukio; Hongo, Minoru
2009-02-01
A new contactless technique for electrical impedance imaging, using an eddy current in combination with the tetrapolar circuit method, is proposed. An eddy current produced by a magnetic field is superimposed on the constant current that is normally used in the tetrapolar circuit method, and is thus used to control the current distribution in the body. By changing the current distribution, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the resistivity distribution. A least-squares error minimization method is used in the reconstruction algorithm. The principle of this method is explained theoretically. A backprojection algorithm was used to obtain 2-D images. Based on this principle, a measurement system was developed and model experiments were conducted with a saline-filled phantom. The estimated shape of each model in the reconstructed image was similar to that of the corresponding model. From the results of these experiments, it is confirmed that the proposed method is applicable to the realization of electrical conductivity imaging.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data into several dimensions of real values, and each of these projected dimensions is then quantized into one bit by thresholding. However, the variances of the projected dimensions differ, and the real-valued projection produces large quantization error. To avoid the large quantization error of real-valued projection, in this paper we propose to use a cosine similarity (angle) projection for each dimension; the angle projection preserves the original structure and yields more compact codes. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
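A rough sketch of the angle-projection idea under stated assumptions: PCA directions are learned from the data, projections are placed on the unit sphere so that Hamming distance between sign codes tracks angular (cosine) similarity, and the ITQ rotation the paper combines this with is omitted.

```python
# PCA projection followed by angle-based sign quantization.
import numpy as np

def train_hash(X, n_bits=32):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return X.mean(axis=0), Vt[:n_bits].T            # top PCA directions

def encode(X, mean, W):
    P = (X - mean) @ W
    P = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-12)  # unit sphere
    return (P > 0).astype(np.uint8)   # sign codes: Hamming distance between
                                      # codes then tracks angular similarity

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))                    # e.g. image descriptors
mean, W = train_hash(X)
print(encode(X, mean, W).shape)                     # (1000, 32) binary codes
```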
A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation
Ali Khan, Wajahat; Hur, Taeho; Muhammad Bilal, Hafiz Syed; Ul Hassan, Anees; Lee, Sungyoung
2018-01-01
The user experience (UX) is an emerging field in user research and design, and the development of UX evaluation methods presents a challenge for both researchers and practitioners. Different UX evaluation methods have been developed to extract accurate UX data. Among UX evaluation methods, the mixed-method approach of triangulation has gained importance. It provides more accurate and precise information about the user while interacting with the product. However, this approach requires skilled UX researchers and developers to integrate multiple devices, synchronize them, analyze the data, and ultimately produce an informed decision. In this paper, a method and system for measuring the overall UX over time using a triangulation method are proposed. The proposed platform incorporates observational and physiological measurements in addition to traditional ones. The platform reduces the subjective bias and validates the user’s perceptions, which are measured by different sensors through objectification of the subjective nature of the user in the UX assessment. The platform additionally offers plug-and-play support for different devices and powerful analytics for obtaining insight on the UX in terms of multiple participants. PMID:29783712
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. The procedure starts from the spectral data and produces informative and non-redundant features, facilitating subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a new, valid, general-purpose feature extraction alternative for various tasks in spectral data analysis.
Highly efficient nonrigid motion-corrected 3D whole-heart coronary vessel wall imaging
Atkinson, David; Henningsson, Markus; Botnar, Rene M.; Prieto, Claudia
2016-01-01
Purpose: To develop a respiratory motion correction framework to accelerate free-breathing three-dimensional (3D) whole-heart coronary lumen and coronary vessel wall MRI. Methods: We developed a 3D flow-independent approach for vessel wall imaging based on the subtraction of data with and without T2-preparation prepulses acquired interleaved with image navigators. The proposed method corrects both datasets to the same respiratory position using beat-to-beat translation and bin-to-bin nonrigid corrections, producing coregistered, motion-corrected coronary lumen and coronary vessel wall images. The proposed method was studied in 10 healthy subjects and was compared with beat-to-beat translational correction (TC) and no motion correction for the left and right coronary arteries. Additionally, the coronary lumen images were compared with a 6-mm diaphragmatic navigator gated and tracked scan. Results: No significant differences (P > 0.01) were found between the proposed method and the gated and tracked scan for coronary lumen, despite an average improvement in scan efficiency to 96% from 59%. Significant differences (P < 0.01) were found in right coronary artery vessel wall thickness, right coronary artery vessel wall sharpness, and vessel wall visual score between the proposed method and TC. Conclusion: The feasibility of a highly efficient motion correction framework for simultaneous whole-heart coronary lumen and vessel wall imaging has been demonstrated. PMID:27221073
Arabi, Hossein; Koutsouvelis, Nikolaos; Rouzaud, Michel; Miralbell, Raymond; Zaidi, Habib
2016-09-07
Magnetic resonance imaging (MRI)-guided attenuation correction (AC) of positron emission tomography (PET) data and/or radiation therapy (RT) treatment planning is challenged by the lack of a direct link between MRI voxel intensities and electron density. Therefore, even if this is not a trivial task, a pseudo-computed tomography (CT) image must be predicted from MRI alone. In this work, we propose a two-step (segmentation and fusion) atlas-based algorithm focusing on bone tissue identification to create a pseudo-CT image from conventional MRI sequences, and evaluate its performance against the conventional MRI segmentation technique and a recently proposed multi-atlas approach. The clinical studies consisted of pelvic CT, PET and MRI scans of 12 patients with loco-regionally advanced rectal disease. In the first step, bone segmentation of the target image is optimized through local weighted atlas voting. The obtained bone map is then used to assess the quality of deformed atlases to perform voxel-wise weighted atlas fusion. To evaluate the performance of the method, a leave-one-out cross-validation (LOOCV) scheme was devised to find optimal parameters for the model. Geometric evaluation of the produced pseudo-CT images and quantitative analysis of the accuracy of PET AC were performed. Moreover, a dosimetric evaluation of volumetric modulated arc therapy photon treatment plans calculated using the different pseudo-CT images was carried out and compared to those produced using CT images serving as references. The pseudo-CT images produced using the proposed method exhibit bone identification accuracy of 0.89 based on the Dice similarity metric compared to 0.75 achieved by the other atlas-based method. The superior bone extraction resulted in a mean standard uptake value bias of -1.5 ± 5.0% (mean ± SD) in bony structures compared to -19.9 ± 11.8% and -8.1 ± 8.2% achieved by MRI segmentation-based (water-only) and atlas-guided AC. Dosimetric evaluation using dose volume histograms and the average difference between minimum/maximum absorbed doses revealed a mean error of less than 1% for both the target volumes and organs at risk. Two-dimensional (2D) gamma analysis of the isocenter dose distributions using a 1%/1 mm criterion revealed pass rates of 91.40 ± 7.56%, 96.00 ± 4.11% and 97.67 ± 3.6% for the MRI segmentation, atlas-guided and proposed methods, respectively. The proposed method generates accurate pseudo-CT images from conventional Dixon MRI sequences with improved bone extraction accuracy. The approach is promising for potential use in PET AC and MRI-only or hybrid PET/MRI-guided RT treatment planning.
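A minimal sketch of the voxel-wise weighted atlas fusion step, assuming the atlas MRI/CT pairs are already deformably registered to the target; the local-mean absolute-difference similarity kernel below stands in for the paper's bone-map-guided weighting and is an assumption.

```python
# Fuse registered atlas CTs into a pseudo-CT with local similarity weights.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_pseudo_ct(target_mri, atlas_mris, atlas_cts, patch=5, beta=0.01):
    num = np.zeros_like(target_mri, dtype=float)
    den = np.zeros_like(target_mri, dtype=float)
    for mri, ct in zip(atlas_mris, atlas_cts):
        local_mad = uniform_filter(np.abs(target_mri - mri), size=patch)
        w = np.exp(-beta * local_mad)            # high weight where atlas matches
        num += w * ct
        den += w
    return num / den                             # voxel-wise weighted average CT

rng = np.random.default_rng(0)
target = rng.normal(size=(16, 16, 16))
atlases = [target + 0.1 * rng.normal(size=target.shape) for _ in range(4)]
cts = [rng.normal(size=target.shape) for _ in range(4)]
print(fuse_pseudo_ct(target, atlases, cts).shape)
```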
Maskless micro/nanofabrication on GaAs surface by friction-induced selective etching
2014-01-01
In the present study, a friction-induced selective etching method was developed to produce nanostructures on a GaAs surface. Without any resist mask, nanofabrication can be achieved by scratching and post-etching in sulfuric acid solution. The effects of the applied normal load and the etching period on the formation of the nanostructures were studied. Results showed that the height of the nanostructures increased with the normal load and the etching period. XPS and Raman detection demonstrated that residual compressive stress and lattice densification were probably the main causes of the selective etching, which eventually led to the protrusive nanostructures in the scratched area of the GaAs surface. Using a homemade multi-probe instrument, the capability of this fabrication method was demonstrated by producing various nanostructures on the GaAs surface, such as linear arrays, intersecting parallels, surface mesas, and letters. In summary, the proposed method provides a straightforward and highly maneuverable micro/nanofabrication route on the GaAs surface. PMID:24495647
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-19
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
The Concept about the Regeneration of Spent Borohydrides and Used Catalysts from Green Electricity
Liu, Cheng-Hong; Chen, Bing-Hung
2015-01-01
Currently, the Brown-Schlesinger process is still regarded as the most common and mature method for the commercial production of sodium borohydride (NaBH4). However, metallic sodium, currently produced by the electrolysis of molten NaCl that is mass-produced by evaporation of seawater or brine, is probably the most costly raw material. Recently, several reports have demonstrated the feasibility of utilizing green electricity, such as offshore wind power, to produce metallic sodium through the electrolysis of seawater. Based on this concept, we have improved and modified our previously proposed life cycle of sodium borohydride (NaBH4) and ammonia borane (NH3BH3) in order to further reduce costs in the conventional Brown-Schlesinger process. In summary, the revised concept, which combines the regeneration of the spent borohydrides and the used catalysts with green electricity, is reflected in (1) metallic sodium being produced from high-purity NaCl obtained by converting the byproduct of NH3BH3 synthesis, avoiding the complicated purification procedures required if it were produced from seawater; and (2) the recycling and regeneration processes of the spent NaBH4 and NH3BH3, as well as the used catalysts, being carried out simultaneously and combined with the proposed life cycle of borohydrides.
NASA Astrophysics Data System (ADS)
Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias
2010-03-01
Thermal ablation induced by high-intensity focused ultrasound has produced promising clinical results for the treatment of hepatocarcinoma and other liver tumors. However, skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs, as sketched below. The ratio of the energies absorbed at the focal point and on the ribs was enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
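A numerical sketch of the projection idea: given a (modeled) transfer matrix whose rows are the responses at rib points, remove from the focusing weight vector its component in that subspace via an SVD. The array size, rib-point count, and random matrices are illustrative assumptions, not the experimental DORT acquisition.

```python
# Project a focusing excitation out of the subspace that reaches the ribs.
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_rib_points = 64, 6
H = (rng.normal(size=(n_rib_points, n_elements))
     + 1j * rng.normal(size=(n_rib_points, n_elements)))   # modeled transfer matrix

w_focus = np.exp(-1j * 2 * np.pi * rng.random(n_elements)) # naive focusing law

# Orthonormal basis of the "emissions reaching the ribs" subspace via SVD.
_, _, Vh = np.linalg.svd(H, full_matrices=False)
w_safe = w_focus - Vh.conj().T @ (Vh @ w_focus)            # project it out

print(np.linalg.norm(H @ w_focus), np.linalg.norm(H @ w_safe))  # rib exposure
```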
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm using an adaptive phase distance as the waveform similarity measure, as sketched below. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variations and is a good tool for seismic facies analysis.
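A rough sketch of a phase-adaptive distance, assuming the analytic signal (Hilbert transform) as the mechanism for applying trial phase rotations and a simple grid search over phases; the paper's per-sample variable-phase measure is more elaborate.

```python
# Distance between waveforms that is insensitive to a phase rotation.
import numpy as np
from scipy.signal import hilbert

def rotate_phase(trace, phi):
    return np.real(hilbert(trace) * np.exp(1j * phi))   # rotate instantaneous phase

def phase_adaptive_distance(trace, model, n_phases=36):
    phis = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
    return min(np.linalg.norm(rotate_phase(trace, phi) - model) for phi in phis)

t = np.linspace(0, 1, 128)
model = np.sin(2 * np.pi * 8 * t) * np.hanning(128)
shifted = rotate_phase(model, 1.2)                      # phase-rotated copy
print(np.linalg.norm(shifted - model),                  # plain Euclidean distance
      phase_adaptive_distance(shifted, model))          # near zero after alignment
```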
Wang, Zhouping; Zhang, Zhujun; Fu, Zhifeng; Fang, Luqiu; Zhang, Xiao
2004-02-01
A novel and highly sensitive method for the determination of phenformin over the range 6 × 10^-9 to 1 × 10^-5 g mL^-1 in pharmaceutical formulations with flow-injection chemiluminescence (CL) detection is proposed. The method is based on the CL produced during the oxidation of N-bromosuccinimide (NBS) in an alkaline medium in the presence of fluorescein as an effective energy transfer agent. The use of cetyltrimethylammonium bromide (CTAB) as a sensitizer enhances the signal magnitude by about 100 times. The detection limit is 2 × 10^-9 g mL^-1 (3σ) with a relative standard deviation of 2.3% (n = 11) at 1 × 10^-7 g mL^-1 phenformin. Ninety samples can be determined per hour. The method was evaluated by carrying out a recovery study and by the analysis of commercial formulations. The results compared well with those obtained by an official method, and demonstrated good accuracy and precision. The possible CL mechanism of the proposed system is also briefly discussed.
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means for more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator, and the default scale estimator with the variance of the Hodges-Lehmann estimator and MADn, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown; a sketch of the main ingredients follows. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
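The sketch below shows the two ingredients, assuming a pooled-resampling percentile bootstrap as a simplified stand-in for the full S1 procedure: the Hodges-Lehmann estimator (the median of pairwise Walsh averages) and a bootstrap two-group test.

```python
# Hodges-Lehmann estimator and a simple bootstrap two-group comparison.
import numpy as np

def hodges_lehmann(x):
    x = np.asarray(x)
    i, j = np.triu_indices(len(x))              # all pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)       # median of Walsh averages

def bootstrap_test(g1, g2, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = hodges_lehmann(g1) - hodges_lehmann(g2)
    pooled = np.concatenate([g1, g2])           # resample under the null
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        s1 = rng.choice(pooled, size=len(g1), replace=True)
        s2 = rng.choice(pooled, size=len(g2), replace=True)
        diffs[b] = hodges_lehmann(s1) - hodges_lehmann(s2)
    return np.mean(np.abs(diffs) >= abs(observed))   # bootstrap p-value

rng = np.random.default_rng(1)
a = rng.exponential(1.0, size=30)               # skewed groups
b = rng.exponential(1.5, size=30)
print(bootstrap_test(a, b))
```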
A solid state tunable laser for resonance measurements of atmospheric sodium
NASA Technical Reports Server (NTRS)
Philbrick, C. R.; Bufton, J. L.; Gardner, C. S.
1985-01-01
The measurement of wave dynamics in the upper mesosphere using a solid-state laser to excite the resonance fluorescence line of sodium is examined. Two Nd:YAG lasers are employed to produce the sodium resonance line. The method involves mixing the 1064 nm radiation with that from a second Nd:YAG operating at 1319 nm in a nonlinear infrared crystal to directly produce 589 nm radiation by sum frequency generation. The use of the transmitter to measure the sodium layer from the Space Shuttle Platform is proposed. A diagram of the laser transmitter is presented.
[Flotation and extraction spectrophotometric determination of trace silicate in water].
Di, J; Liu, Q; Li, W
2000-12-01
In HCl solution, silicate reacts with ammonium molybdate to produce silicomolybdic acid; a yellow compound produced from the oxidation of TMB is then simultaneously isolated into the benzene phase by flotation and subsequently extracted into dimethylsulfoxide-formic acid. The compound gives a strong absorption at 458 nm. The apparent molar absorptivity is 1.26 × 10^5 L mol^-1 cm^-1, and Beer's law is obeyed over the range 0.02-1 mg L^-1 Si. The proposed method, which combines enrichment and measurement, is simple, rapid, selective and convenient for the determination of silicate in water, with satisfactory results.
NASA Astrophysics Data System (ADS)
Krasnoveikin, V. A.; Kozulin, A. A.; Skripnyak, V. A.
2017-11-01
Severe plastic deformation by equal channel angular pressing has been performed to produce light aluminum and magnesium alloy billets with ultrafine-grained structure. The physical and mechanical properties of the processed alloys are examined by studying their microstructure, measuring microhardness, yield strength, and uniaxial tensile strength. A nondestructive testing technique using three-dimensional X-ray tomography is proposed for detecting internal structural defects and monitoring damage formation in the structure of alloys subjected to severe plastic deformation. The investigation results prove the efficiency of the chosen method and selected mode of producing ultrafine-grained light alloys.
Comic image understanding based on polygon detection
NASA Astrophysics Data System (ADS)
Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong
2013-01-01
Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationships between each pair of polygons. The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.
Three-dimensional illusion thermal device for location camouflage.
Wang, Jing; Bi, Yanqiang; Hou, Quanwen
2017-08-08
Thermal metamaterials, proposed in recent years, provide a new way to manipulate the energy flux in heat transfer and have resulted in many novel thermal devices. In this paper, an illusion thermal device for location camouflage in the 3-dimensional heat conduction regime is proposed based on transformation thermodynamics. The heat source covered by the device produces a fake signal outside the device, which makes the source appear to be at another position away from its real position. The parameters required by the device are deduced and the method is validated by simulations. A possible scheme for obtaining the thermal conductivities required in the device by combining natural materials is supplied, and the influence of some practical fabrication issues on the camouflage effect is also discussed.
Three-dimensional information hierarchical encryption based on computer-generated holograms
NASA Astrophysics Data System (ADS)
Kong, Dezhao; Shen, Xueju; Cao, Liangcai; Zhang, Hao; Zong, Song; Jin, Guofan
2016-12-01
A novel approach for encrypting three-dimensional (3-D) scene information hierarchically based on computer-generated holograms (CGHs) is proposed. The CGHs of the layer-oriented 3-D scene information are produced by angular-spectrum propagation algorithm at different depths. All the CGHs are then modulated by different chaotic random phase masks generated by the logistic map. Hierarchical encryption encoding is applied when all the CGHs are accumulated one by one, and the reconstructed volume of the 3-D scene information depends on permissions of different users. The chaotic random phase masks could be encoded into several parameters of the chaotic sequences to simplify the transmission and preservation of the keys. Optical experiments verify the proposed method and numerical simulations show the high key sensitivity, high security, and application flexibility of the method.
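A minimal sketch of generating a chaotic random phase mask from the logistic map, where the control parameter and initial value act as keys; the burn-in length, key values, and mask size are illustrative assumptions.

```python
# Chaotic random phase mask from the logistic map x -> r*x*(1-x).
import numpy as np

def logistic_phase_mask(shape, x0=0.3761, r=3.9999, burn_in=500):
    n = np.prod(shape)
    x = x0
    for _ in range(burn_in):                 # discard transient iterations
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for k in range(n):
        x = r * x * (1.0 - x)
        seq[k] = x
    return np.exp(1j * 2 * np.pi * seq.reshape(shape))   # unit-modulus mask

mask = logistic_phase_mask((256, 256))
hologram = np.ones((256, 256), dtype=complex)            # placeholder CGH
encrypted = hologram * mask                              # phase modulation
print(np.allclose(np.abs(mask), 1.0))                    # pure phase, no amplitude change
```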
A graph-Laplacian-based feature extraction algorithm for neural spike sorting.
Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos
2009-01-01
Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
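A rough sketch in the spirit of GLF, under stated assumptions: a k-nearest-neighbour similarity graph is built over spike waveforms and the smallest nontrivial Laplacian eigenvectors are taken as features (Laplacian-eigenmaps style); the weighted-variance maximization of the actual GLF objective is omitted.

```python
# Graph-Laplacian-based features for spike waveforms.
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_features(X, k=10, n_features=3, sigma=1.0):
    D = cdist(X, X)                                  # pairwise distances
    W = np.exp(-D**2 / (2 * sigma**2))               # Gaussian similarities
    keep = np.argsort(D, axis=1)[:, 1:k + 1]         # kNN sparsification
    M = np.zeros_like(W)
    rows = np.repeat(np.arange(len(X)), k)
    M[rows, keep.ravel()] = W[rows, keep.ravel()]
    W = np.maximum(M, M.T)                           # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian
    _, vecs = np.linalg.eigh(L)                      # ascending eigenvalues
    return vecs[:, 1:n_features + 1]                 # skip trivial eigenvector

X = np.random.default_rng(0).normal(size=(300, 48)) # 300 spike waveforms
print(laplacian_features(X).shape)                  # (300, 3) feature space
```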
Novel liquid equilibrium valving on centrifugal microfluidic CD platform.
Al-Faqheri, Wisam; Ibrahim, Fatimah; Thio, Tzer Hwai Gilbert; Arof, Hamzah; Madou, Marc
2013-01-01
One of the main challenges faced by researchers in the field of microfluidic compact disc (CD) platforms is the control of liquid movement and sequencing during spinning. This paper presents a novel microfluidic valve based on the principle of liquid equilibrium on a rotating CD. The proposed liquid equilibrium valve operates by balancing the pressures produced by the liquids in a source and a venting chamber during spinning. The valve does not require external forces or triggers, and is able to regulate burst frequencies with high accuracy. In this work, we demonstrate that the burst frequency can be raised significantly by a small adjustment of the liquid height in the vent chamber. Finally, the proposed valving method can be used separately or combined with other valving methods in advanced microfluidic processes.
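The balancing principle can be made concrete with the common centrifugal-pressure model P = ρω²·r̄·Δr. A minimal sketch, assuming a constant capillary counterpressure P_cap holds the liquid back; all geometry and pressure values are illustrative, not the paper's.

```python
# Sketch: burst-frequency estimate for an equilibrium valve, using the common
# model P = rho * omega^2 * r_mean * delta_r plus an assumed capillary
# counterpressure. All numbers are illustrative.
import numpy as np

RHO = 1000.0            # water density, kg/m^3
P_CAP = 200.0           # capillary counterpressure, Pa (assumed)

def head(r_inner, r_outer):
    """Geometric factor r_mean * delta_r of a liquid column (m^2)."""
    return 0.5 * (r_inner + r_outer) * (r_outer - r_inner)

def burst_rpm(source, vent):
    """Spin speed at which net centrifugal pressure first exceeds P_CAP."""
    net = RHO * (head(*source) - head(*vent))
    if net <= 0:
        return float("inf")          # vent column always dominates: no burst
    return np.sqrt(P_CAP / net) * 60 / (2 * np.pi)

# A small rise of the vent-chamber liquid level raises the burst frequency a lot:
print(burst_rpm((0.020, 0.030), (0.020, 0.0280)))   # ~560 rpm
print(burst_rpm((0.020, 0.030), (0.020, 0.0295)))   # ~1100 rpm
```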
Kappa statistic for clustered dichotomous responses from physicians and patients.
Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L; Cai, Jianwen
2013-09-20
The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. We present an example of an application to a coronary heart disease prevention study. Copyright © 2013 John Wiley & Sons, Ltd.
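A minimal sketch of the cluster (physician-level) bootstrap idea, assuming a table with one row per patient and hypothetical column names physician, physician_rating, and patient_rating.

```python
# Sketch: bootstrap SE of Cohen's kappa under clustering, resampling whole
# physicians with replacement. Column names are assumed for illustration.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def cluster_bootstrap_kappa_se(df, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    groups = dict(tuple(df.groupby("physician")))   # one block per cluster
    ids = list(groups)
    kappas = np.empty(n_boot)
    for b in range(n_boot):
        pick = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([groups[i] for i in pick])
        kappas[b] = cohen_kappa_score(boot["physician_rating"],
                                      boot["patient_rating"])
    return kappas.std(ddof=1)                        # bootstrap standard error
```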
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation traditionally associated with structural optimization.
Replacement Attack: A New Zero Text Watermarking Attack
NASA Astrophysics Data System (ADS)
Bashardoost, Morteza; Mohd Rahim, Mohd Shafry; Saba, Tanzila; Rehman, Amjad
2017-03-01
The main objective of zero watermarking methods suggested for the authentication of textual properties is to increase the fragility of the produced watermarks against tampering attacks. Zero watermarking attacks, on the other hand, intend to alter the contents of a document without changing the watermark. In this paper, the Replacement attack is proposed, which focuses on maintaining the location of the words in the document. The proposed text watermarking attack is particularly effective against watermarking approaches that exploit word transitions in the document. The evaluation outcomes show that the tested word-based methods are unable to detect the presence of a Replacement attack in the document. Moreover, the comparison results show that the size of a Replacement attack is estimated less accurately than that of other common types of zero text watermarking attacks.
Plant tissue-based chemiluminescence biosensor for ethanol.
Huang, Yuming; Wu, Fangqiong
2006-07-01
A plant tissue-based chemiluminescence biosensor for ethanol, using mushroom (Agaricus bisporus) tissue as the recognition element, is proposed in this paper. The principle of ethanol sensing relies on the luminol-potassium hexacyanoferrate(III)-hydrogen peroxide transducer reaction, in which hydrogen peroxide is produced from the enzymatic oxidation of ethanol by oxygen under the catalysis of alcohol oxidase in the tissue column. Under optimum conditions, the method allowed the measurement of ethanol in the range of 0.001-2 mmol/l with a detection limit (3 sigma) of 0.2 micromol/l. The relative standard deviation (RSD) was 4.14% (n = 11) for 0.05 mmol/l ethanol. The proposed method has been applied to the determination of ethanol in biological fluids and beverages with satisfactory results.
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method based on the guided image filter for accurate and robust night fusion image tracking. First, frame differencing is applied to produce a coarse target, which helps to generate the observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
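The guided image filter itself is a published, standard algorithm (He et al.); a minimal sketch of it follows, as used here to refine a coarse frame-difference map with the source image as guidance. The window radius and epsilon are illustrative choices.

```python
# Sketch: the standard guided image filter (He et al.), built from box filters.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """I: guidance image, p: coarse input map; both float arrays in [0, 1]."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)               # local linear model q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```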
Multimodal Image Registration through Simultaneous Segmentation.
Aganj, Iman; Fischl, Bruce
2017-11-01
Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Frollo, Ivan
2017-12-01
The paper focuses on two methods for evaluating the success of speech signal enhancement for recordings made in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
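A minimal sketch of the GMM classification stage, assuming feature vectors have already been extracted from the enhanced recordings; the class labels, feature dimensionality, and component count are illustrative.

```python
# Sketch: fit one GMM per quality class, then assign test vectors to the class
# with the highest log-likelihood. Training data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_by_class, n_components=4, seed=0):
    """Fit one GMM per class on its training feature vectors."""
    return {label: GaussianMixture(n_components, covariance_type="diag",
                                   random_state=seed).fit(X)
            for label, X in features_by_class.items()}

def classify(gmms, X):
    """Pick, per row of X, the class whose GMM scores it highest."""
    labels = list(gmms)
    scores = np.column_stack([gmms[l].score_samples(X) for l in labels])
    return [labels[i] for i in scores.argmax(axis=1)]

rng = np.random.default_rng(0)
train = {"enhanced": rng.normal(0, 1, (200, 12)),
         "noisy": rng.normal(1, 1, (200, 12))}
print(classify(train_gmms(train), rng.normal(0, 1, (5, 12))))
```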
Diffracted diffraction radiation and its application to beam diagnostics
NASA Astrophysics Data System (ADS)
Goponov, Yu. A.; Shatokhin, R. A.; Sumitani, K.; Syshchenko, V. V.; Takabayashi, Y.; Vnukov, I. E.
2018-03-01
We present theoretical considerations for diffracted diffraction radiation and also propose an application of this process to diagnosing ultra-relativistic electron (positron) beams for the first time. Diffraction radiation is produced when relativistic particles move near a target. If the target is a crystal or X-ray mirror, diffraction radiation in the X-ray region is expected to be diffracted at the Bragg angle and therefore be detectable. We present a scheme for applying this process to measurements of the beam angular spread, and consider how to conduct a proof-of-principle experiment for the proposed method.
Two-Wavelength Multi-Gigahertz Frequency Comb-Based Interferometry for Full-Field Profilometry
NASA Astrophysics Data System (ADS)
Choi, Samuel; Kashiwagi, Ken; Kojima, Shuto; Kasuya, Yosuke; Kurokawa, Takashi
2013-10-01
The multi-gigahertz frequency comb-based interferometer exhibits only the interference amplitude peak, without phase fringes, which enables a rapid axial scan for full-field profilometry and tomography. Despite these technical advantages, a problem remains: interference intensity undulations occur depending on the interference phase. To avoid this problem, we propose a compensation technique for the interference signals using two frequency combs with slightly different center wavelengths. Compensated full-field surface profile measurements of a cover glass and onion skin were demonstrated experimentally to verify the advantages of the proposed method.
NASA Technical Reports Server (NTRS)
Holloway, C. M.; Johnson, C. W.
2007-01-01
In the early years of powered flight, the National Advisory Committee for Aeronautics in the United States produced three reports describing a method of analysis of aircraft accidents. The first report was published in 1928; the second, a revision of the first, was published in 1930; and the third, a revision and update of the second, was published in 1936. This paper describes the contents of these reports and compares the method of analysis proposed therein to the methods used today.
NASA Astrophysics Data System (ADS)
Sanchez, J.
2018-06-01
In this paper, the asymptotic approximation method is applied to and analyzed for a single degree-of-freedom system. The original concepts are summarized, and the necessary probabilistic concepts are developed and applied to single degree-of-freedom systems. These concepts are then united, and the theoretical and computational models are developed. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
Stationarity conditions for physicochemical processes in the interior ballistics of a gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipanov, A.M.
1995-09-01
An original method is proposed for ensuring time-invariant (stationary) interior ballistic parameters in the postprojectile space of a gun barrel. Stationarity of the parameters is achieved by giving the solid-propellant charge highly original structures that produce the required pressure conditions and linear growth of the projectile velocity. Simple relations are obtained for calculating the principal characteristics.
This multi-year pilot study evaluated a proposed field method for its effectiveness in the collection of a benthic macroinvertebrate sample adequate for use in the condition assessment of streams and rivers in the Neuquén Province, Argentina. A total of 13 sites, distribut...
Economics of cutting hardwood dimension parts with an automated system
Henry A. Huber; Steve Ruddell; Kalinath Mukherjee; Charles W. McMillin
1989-01-01
A financial analysis using discounted cash-flow decision methods was completed to determine the economic feasibility of replacing a conventional roughmill crosscut and rip operation with a proposed automated computer vision and laser cutting system. Red oak and soft maple lumber were cut at production levels of 30 thousand board feet (MBF)/day and 5 MBF/day to produce...
NASA Astrophysics Data System (ADS)
Aigyl Ilshatovna, Sabirova; Svetlana Fanilevna, Khasanova; Vildanovna, Nagumanova Regina
2018-05-01
On the basis of decision-making theory (minimax and maximin approaches), the authors propose a technique, together with calculated critical values of the effectiveness indicators of agricultural producers in the Republic of Tatarstan for 2013-2015. The necessity of monitoring the effectiveness of state support, and directions for its improvement, are justified.
Elucidation of Diels-Alder Reaction Network of 2,5-Dimethylfuran and Ethylene on HY Zeolite Catalyst
DOE Office of Scientific and Technical Information (OSTI.GOV)
Do, Phuong T. M.; McAtee, Jesse R.; Watson, Donald A.
2012-12-12
The reaction of 2,5-dimethylfuran and ethylene to produce p-xylene represents a potentially important route for the conversion of biomass to high-value organic chemicals. Current preparation methods suffer from low selectivity and produce a number of byproducts. Using modern separation and analytical techniques, the structures of many of the byproducts produced in this reaction when HY zeolite is employed as a catalyst have been identified. From these data, a detailed reaction network is proposed, demonstrating that hydrolysis and electrophilic alkylation reactions compete with the desired Diels–Alder/dehydration sequence. This information will allow the rational identification of more selective catalysts and more selective reaction conditions.
Letfullin, Renat R; George, Thomas F
2017-05-01
We introduce a new method for selectively destroying cancer cell organelles using electrons emitted from the surface of intracellularly localized nanoparticles exposed to nonionizing ultraviolet (UV) radiation. We propose to target cancerous intracellular organelles with nanoparticles and expose them to UV radiation at an energy density safe for healthy tissue. We simulate the number of photoelectrons produced by nanoparticles of various metals and radii, calculate their kinetic energy, and compare it to the threshold energy for producing biological damage. Exposure of metal nanoparticles to UV radiation generates photoelectrons with kinetic energies up to 11 eV, which is high enough to produce single- to double-strand breaks in DNA and damage cancerous cell organelles.
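The single-photon part of such a simulation reduces to KE = hc/λ − W. A minimal sketch using approximate literature work functions; it does not reproduce the paper's full nanoparticle model (which reports energies up to 11 eV), only the basic threshold arithmetic.

```python
# Sketch: single-photon photoelectron kinetic energy, KE = h*c/lambda - W.
# Work functions below are approximate literature values (eV), for illustration.
H_C_EV_NM = 1239.84                                     # h*c in eV*nm
WORK_FUNCTION_EV = {"Au": 5.1, "Ag": 4.3, "Al": 4.1}    # approximate values

def photoelectron_ke(metal, wavelength_nm):
    """Kinetic energy of a single-photon photoelectron, floored at zero."""
    ke = H_C_EV_NM / wavelength_nm - WORK_FUNCTION_EV[metal]
    return max(ke, 0.0)

for metal in WORK_FUNCTION_EV:
    print(metal, round(photoelectron_ke(metal, 193.0), 2), "eV")  # ArF UV line
```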
Martins, Rui; Oliveira, Paulo Eduardo; Schmitt, Aurore
2012-06-10
We discuss here the estimation of age at death from two indicators (the pubic symphysis and the sacro-pelvic surface of the ilium) based on four osteological series from Portugal, Great Britain, South Africa, and the USA (European origin). These samples and the scoring system of the two indicators were used by Schmitt et al. (2002), applying the methodology proposed by Lucy et al. (1996). In the present work, the same data were processed using a modification of the empirical method proposed by Lucy et al. (2002). The various probability distributions are estimated from training data using kernel density procedures and the jackknife methodology. Bayes's theorem is then used to produce the posterior distribution from which point and interval estimates may be made. This statistical approach reduces the bias of the estimates to less than 70% of that obtained with the initial method; the reduction reaches 52% when the sex of the individual is known. The approach also produces an age estimate for all individuals, improving age-at-death assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
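A minimal sketch of the kernel-density-plus-Bayes step, assuming discrete indicator stages and a uniform age prior; the stage definitions and data are toy stand-ins, not the osteological series used in the paper.

```python
# Sketch: posterior over age given one indicator stage, via a kernel density
# estimate of the training ages at that stage and a uniform prior.
import numpy as np
from scipy.stats import gaussian_kde

def posterior_age(train_ages, train_stages, observed_stage,
                  age_grid=np.arange(18, 100)):
    ages = train_ages[train_stages == observed_stage]
    density = gaussian_kde(ages)(age_grid)     # KDE over the age grid
    return age_grid, density / density.sum()   # uniform prior -> normalize

rng = np.random.default_rng(0)
ages = rng.uniform(18, 90, 300)
stages = np.clip((ages // 15).astype(int), 1, 5)   # toy stage definitions
grid, post = posterior_age(ages, stages, 3)
print(grid[post.argmax()])                          # posterior mode (~50 yr)
```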
One-step fabrication of nickel nanocones by electrodeposition using CaCl2·2H2O as capping reagent
NASA Astrophysics Data System (ADS)
Lee, Jae Min; Jung, Kyung Kuk; Lee, Sung Ho; Ko, Jong Soo
2016-04-01
In this research, a method for the fabrication of nickel nanocones through the addition of CaCl2·2H2O to an electrodeposition solution is proposed. When electrodeposition was performed after CaCl2·2H2O addition, precipitation of Ni ions onto the (2 0 0) crystal face was suppressed and anisotropic growth of the electrodeposited nickel structures was promoted. Sharper nanocones were produced with increasing concentration of CaCl2·2H2O in the solution. Moreover, as the temperature of the electrodeposition solution approached 60 °C, the apex angle of the nanostructures decreased. In addition, the produced nanocones were applied to superhydrophobic surface modification using a plasma-polymerized fluorocarbon (PPFC) coating. When the solution temperature was maintained at 60 °C and the concentration of added CaCl2·2H2O was 1.2 M or higher, the fabricated samples showed superhydrophobic surface properties. The proposed nickel nanocone formation method can be applied in various industrial fields that require metal nanocones, including superhydrophobic surface modification.
Maskless and low-destructive nanofabrication on quartz by friction-induced selective etching
2013-01-01
A low-destructive friction-induced nanofabrication method is proposed to produce three-dimensional nanostructures on a quartz surface. Without any template, nanofabrication can be achieved by low-destructive scanning of a target area followed by post-etching in a KOH solution. Various nanostructures, such as slopes, hierarchical stages, and chessboard-like patterns, can be fabricated on the quartz surface. Although raising the etching temperature can improve fabrication efficiency, the fabrication depth depends only on the contact pressure and the number of scanning cycles. With increasing contact pressure during scanning, the selective etching thickness of the scanned area increases from 0 to 2.9 nm before the yield of the quartz surface and then tends to stabilise after the appearance of wear. Refabrication on existing nanostructures can be performed to produce deeper structures on the quartz surface. Based on Arrhenius fitting of the etching rate and transmission electron microscopy characterization of the nanostructures, the fabrication mechanism can be attributed to the selective etching of the friction-induced amorphous layer on the quartz surface. As a maskless and low-destructive technique, the proposed friction-induced method will open up new possibilities for further nanofabrication. PMID:23531381
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm, and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
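The proposed hybrid optimizer is not reproduced here; as a stand-in, the sketch below uses SciPy's differential evolution (one of the baselines the paper compares against) to show the fit-to-noisy-data setup on a toy exponential model.

```python
# Sketch: parameter estimation by minimizing the model-vs-data fit error.
# The optimizer is SciPy's differential evolution, not the proposed hybrid;
# the exponential model and noise level are toy choices.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(1)
data = 2.0 * np.exp(-0.8 * t) + 0.05 * rng.normal(size=t.size)  # noisy data

def sse(params):
    k, a = params
    return np.sum((a * np.exp(-k * t) - data) ** 2)   # sum of squared errors

result = differential_evolution(sse, bounds=[(0.01, 5.0), (0.1, 10.0)], seed=2)
print(result.x)   # recovered (k, a), close to the true (0.8, 2.0)
```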
Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang
2015-01-01
Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired between birth and 1 year of age. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, to guide the registration of two different time-point images with different appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with potentially a large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the growth trajectories from one time point to another that were established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method was used to align images of 24 infant subjects at five time points (2 weeks, 3 months, 6 months, 9 months, and 12 months of age). Compared to state-of-the-art methods, the proposed method demonstrated superior registration performance. Conclusions: The proposed method addresses the difficulties of infant brain registration and produces better results than existing state-of-the-art registration methods. PMID:26133617
Tang, P; Brouwers, H J H
2017-04-01
The cold-bonding pelletizing technique is applied in this study as an integrated method to recycle municipal solid waste incineration (MSWI) bottom ash fines (BAF, 0-2 mm) together with several other industrial powder wastes. Artificial lightweight aggregates are successfully produced from the combination of these solid wastes, and the properties of these artificial aggregates are investigated and compared with results reported in the literature. Additionally, methods for improving the aggregate properties are suggested, and the corresponding experimental results show that increasing the BAF amount, a higher binder content, and the addition of polypropylene fibres can improve the pellet properties (bulk density, crushing resistance, etc.). The mechanisms behind the improvement of the pellet properties are discussed. Furthermore, the leaching behaviour of contaminants from the produced aggregates is investigated and compared with Dutch environmental legislation. Applications of the produced artificial lightweight aggregates are proposed according to their properties. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jamzad, Amoon; Setarehdan, Seyed Kamaledin
2014-04-01
The twinkling artifact is an undesired phenomenon in color Doppler sonograms that usually appears at the site of internal calcifications. Since the appearance of the twinkling artifact is correlated with the roughness of the calculi, noninvasive roughness estimation of internal stones is a potential application of the artifact. This article proposes a novel quantitative approach to the measurement and analysis of twinkling artifact data for roughness estimation. A phantom was developed with seven quantified levels of roughness. The Doppler system was initially calibrated by the proposed procedure to facilitate the analysis. A total of 1050 twinkling artifact images were acquired from the phantom, and 32 novel numerical measures were introduced and computed for each image. The measures were then ranked by their roughness quantification ability using different methods. The performance of the proposed twinkling artifact-based surface roughness quantification method was finally investigated for different combinations of features and classifiers. Eleven features were shown to be the most efficient numerical twinkling artifact measures for roughness characterization. The linear classifier outperformed the other methods for twinkling artifact classification. The pixel-count measures produced better results than the other categories. The sequential selection method showed higher accuracy than the other individual rankings. The best average roughness recognition accuracy, 98.33%, was obtained with the first five principal components and the linear classifier. The proposed twinkling artifact analysis method could recognize the phantom surface roughness with an average accuracy of 98.33%. This method may also be applicable to noninvasive calculi characterization in treatment management.
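A minimal sketch of the best reported configuration, the first five principal components followed by a linear classifier, on stand-in data; the synthetic feature matrix replaces the real 32 twinkling-artifact measures.

```python
# Sketch: PCA(5) + linear discriminant classification of roughness levels.
# make_classification stands in for the 1050-image, 32-measure dataset.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1050, n_features=32, n_informative=10,
                           n_classes=7, random_state=0)   # 7 roughness levels
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())             # cross-validated accuracy
```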
NASA Astrophysics Data System (ADS)
Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing
2016-10-01
Endmember extraction is a key step in hyperspectral unmixing. A new framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, where discretization is used because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that often belong to the same classes. Three endmember extraction methods are proposed, based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
de la Torre, Xavier; Colamonici, Cristiana; Curcio, Davide; Molaioni, Francesco; Pizzardi, Marta; Botrè, Francesco
2011-04-01
Nandrolone and/or its precursors are included in the World Anti-Doping Agency (WADA) list of forbidden substances and methods, and as such their use is banned in sport. 19-Norandrosterone (19-NA), the main metabolite of these compounds, can also be produced endogenously. The need to establish the origin of 19-NA in human urine samples obliges antidoping laboratories to use isotope ratio mass spectrometry coupled to gas chromatography (GC/C/IRMS). In this work, a simple liquid chromatographic method without any additional derivatization step is proposed, which drastically simplifies the urine pretreatment procedure and yields interference-free extracts that permit precise and accurate IRMS analysis. The purity of the extracts was verified by parallel analysis by gas chromatography coupled to mass spectrometry, with GC conditions identical to those of the GC/C/IRMS assay. The method has been validated according to ISO 17025 requirements (within-assay precision of ±0.3‰ and between-assay precision of ±0.4‰). The method has been tested with samples obtained after the administration of synthetic 19-norandrostenediol and with samples collected during pregnancy, when 19-NA is known to be produced endogenously. Twelve drugs and synthetic standards that can yield 19-NA through metabolism were shown to present quite homogeneous δ(13)C values around -29‰ (-28.8 ± 1.5; mean ± standard deviation), while endogenously produced 19-NA showed values comparable to other endogenously produced steroids, in the range -21 to -24‰, as already reported. The efficacy of the method was tested on real samples from routine antidoping analyses. Copyright © 2011 Elsevier Inc. All rights reserved.
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md. Nurul Haque
2015-01-01
Background: Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, to overcome problems that arise in existing robust methods for both small- and large-sample cases with multiple patterns of expression. Results: The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure for outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate a cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Conclusion: Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB, and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both small- and large-sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, a setting to which BetaEB has not been extended. Therefore, the proposed approach would be more suitable and reliable, on average, for the identification of DE genes between two or more conditions with multiple patterns of expression. PMID:26413858
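One common form of the β-weight in minimum β-divergence estimation is w(x) = exp(−β(x − μ)²/(2σ²)). A minimal sketch under that assumption; the location/scale estimates and the cut-off rule below are illustrative stand-ins for the paper's exact procedure.

```python
# Sketch: a beta-weight function mapping typical expressions near 1 and
# outliers near 0. The median/std estimates and 5% cut-off are illustrative.
import numpy as np

def beta_weights(x, beta=0.2):
    """w(x) = exp(-beta * (x - mu)^2 / (2 * sigma^2)), values in (0, 1]."""
    mu, sigma = np.median(x), np.std(x)
    return np.exp(-beta * (x - mu) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5, 1, 95), [15, 18, -9, 20, 25]])  # 5 outliers
w = beta_weights(x)
cutoff = np.quantile(w, 0.05)        # illustrative cut-off from the weights
print(x[w < cutoff])                 # flagged outlying expressions
```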
Inference in randomized trials with death and missingness.
Wang, Chenguang; Scharfstein, Daniel O; Colantuoni, Elizabeth; Girard, Timothy D; Yan, Ying
2017-06-01
In randomized studies involving severely ill patients, functional outcomes are often unobserved due to missed clinic visits, premature withdrawal, or death. It is well known that if these unobserved functional outcomes are not handled properly, biased treatment comparisons can be produced. In this article, we propose a procedure for comparing treatments that is based on a composite endpoint that combines information on both the functional outcome and survival. We further propose a missing data imputation scheme and sensitivity analysis strategy to handle the unobserved functional outcomes not due to death. Illustrations of the proposed method are given by analyzing data from a recent non-small cell lung cancer clinical trial and a recent trial of sedation interruption among mechanically ventilated patients. © 2016, The International Biometric Society.
Predicting Presynaptic and Postsynaptic Neurotoxins by Developing Feature Selection Technique
Yang, Yunchun; Zhang, Chunmei; Chen, Rong; Huang, Po
2017-01-01
Presynaptic and postsynaptic neurotoxins are proteins that act at the presynaptic and postsynaptic membranes. Correctly predicting presynaptic and postsynaptic neurotoxins will provide important clues for drug-target discovery and drug design. In this study, we developed a theoretical method to discriminate presynaptic neurotoxins from postsynaptic neurotoxins. A strict and objective benchmark dataset was constructed to train and test the proposed model. The dipeptide composition was used to formulate the neurotoxin samples. Analysis of variance (ANOVA) was used to find the optimal feature set producing the maximum accuracy. In the jackknife cross-validation test, an overall accuracy of 94.9% was achieved. We believe that the proposed model will provide important information for the study of neurotoxins. PMID:28303250
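A minimal sketch of the 400-dimensional dipeptide composition encoding; the ANOVA-based selection can then be approximated with per-feature F-scores (noted in the comments), though the paper's exact selection procedure may differ.

```python
# Sketch: 400-dim dipeptide composition vector for a protein sequence.
import numpy as np
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"
INDEX = {a + b: i for i, (a, b) in enumerate(product(AA, repeat=2))}

def dipeptide_composition(seq):
    """Normalized frequencies of the 400 possible dipeptides."""
    v = np.zeros(len(INDEX))
    for i in range(len(seq) - 1):
        dp = seq[i:i + 2]
        if dp in INDEX:                   # skip non-standard residues
            v[INDEX[dp]] += 1
    return v / max(len(seq) - 1, 1)

# With X = np.array([dipeptide_composition(s) for s in seqs]) and labels y,
# sklearn.feature_selection.f_classif(X, y) gives per-dipeptide ANOVA F-scores.
print(dipeptide_composition("MKTLLLTLVVVTIVCLDLGYT").sum())   # 1.0
```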
A new method for measuring low resistivity contacts between silver and YBa2Cu3O(7-x) superconductor
NASA Technical Reports Server (NTRS)
Hsi, Chi-Shiung; Haertling, Gene H.; Sherrill, Max D.
1991-01-01
Several methods of measuring the contact resistivity between silver electrodes and YBa2Cu3O(7-x) superconductors were investigated, including the two-point, three-point, and lap-joint methods. The lap-joint method was found to yield the most consistent and reliable results and is proposed as a new technique for this measurement. Painting, embedding, and melting methods were used to apply the electrodes to the superconductor. Silver electrodes produced good ohmic contacts to YBa2Cu3O(7-x) superconductors, with contact resistivities as low as 1.9 × 10(-9) Ω cm(2).
Development of a Remote Consultation System Using Avatar Technology
NASA Astrophysics Data System (ADS)
Ohnishi, Tatsuya; Yajima, Hiroshi; Sawamoto, Jun
Opportunities to use the Internet as a communication tool are increasing, and consultation services for customers at remote locations are diversifying in their communication media and forms. In remote consultation, the lack of non-verbal information is reported as one of the reasons for inefficiency and customer dissatisfaction compared with face-to-face consultation. Supplementing non-verbal information with a TV telephone has been proposed; it helps participants confirm the degree of understanding or the timing of utterances by watching the movement of the face. However, the displayed face of the partner causes a strong feeling of strain between strangers, and participants are also distracted by the background scene displayed on the monitor, which introduces risks into the consultation tasks. In this paper, we propose a remote consultation method that uses avatar technology in a virtual space to provide non-verbal information while avoiding the problems of the TV telephone. The effectiveness of the proposed remote consultation method is confirmed by experiments.
A Kinect based sign language recognition system using spatio-temporal features
NASA Astrophysics Data System (ADS)
Memiş, Abbas; Albayrak, Songül
2013-12-01
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on the RGB images and depth maps separately. The DCT coefficients that represent the sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. The performance of the proposed system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 TSL words in three different categories. The proposed sign language recognition system achieves promising success rates.
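A minimal sketch of the temporal pipeline described above: accumulated motion image, 2D DCT, zigzag scan. The number of retained coefficients is an illustrative choice.

```python
# Sketch: accumulate frame differences, take the 2D DCT, keep the first
# coefficients in zigzag order as the feature vector.
import numpy as np
from scipy.fft import dctn

def accumulated_motion_image(frames):
    """frames: (T, H, W) grayscale frames; sum of absolute frame differences."""
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def zigzag_indices(h, w):
    return sorted(((i, j) for i in range(h) for j in range(w)),
                  key=lambda ij: (ij[0] + ij[1],
                                  ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

def dct_features(frames, n_coeffs=64):
    coeffs = dctn(accumulated_motion_image(frames), norm="ortho")
    zz = zigzag_indices(*coeffs.shape)[:n_coeffs]
    return np.array([coeffs[i, j] for i, j in zz])

# sklearn.neighbors.KNeighborsClassifier(metric="manhattan") on these vectors
# matches the paper's classifier choice.
rng = np.random.default_rng(0)
print(dct_features(rng.random((30, 64, 64))).shape)   # (64,)
```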
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut
2016-06-01
Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity in group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure based regularization has the potential to balance structural a priori information with data driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
Perez-Cruz, Angel; Stiharu, Ion; Dominguez-Gonzalez, Aurelio
2017-07-20
In recent years, paper-based microfluidic systems have emerged as versatile tools for developing sensors in different areas. In this work, we report a novel physical sensing principle for the characterization of liquids using a paper-based hygro-mechanical system (PB-HMS). The PB-HMS is formed by the interaction of liquid droplets and paper-based mini-structures such as cantilever beams. The proposed principle takes advantage of the hygroscopic properties of paper to produce hygro-mechanical motion. The dynamic response of the PB-HMS reveals information about the tested liquid that can be applied to characterize certain properties of liquids. A method to characterize liquids by means of the proposed principle is introduced, and the experimental results show its feasibility. It is expected that the proposed principle may be applied to sense properties of liquids in applications where both disposability and portability are of primary importance.
Esteves, Lorena C R; Oliveira, Thaís R O; Souza, Elias C; Bomfeti, Cleide A; Gonçalves, Andrea M; Oliveira, Luiz C A; Barbosa, Fernando; Pereira, Márcio C; Rodrigues, Jairo L
2015-04-01
An easy, fast, and environmentally friendly method for COD determination in water is proposed. The procedure is based on the oxidation of organic matter by the H2O2/Fe(3-x)Co(x)O4 system. The Fe(3-x)Co(x)O4 nanoparticles activate the H2O2 molecule to produce hydroxyl radicals, which are highly reactive and oxidize organic matter in an aqueous medium. After the oxidation step, the amount of organic matter can be quantified from the quantity of H2O2 consumed. The proposed COD method has several distinct advantages: it does not use toxic reagents, and the oxidation of organic matter is conducted at room temperature and atmospheric pressure. The method detection limit is 2.0 mg L(-1), with intra- and inter-day precision lower than 1% (n=5). The calibration graph is linear in the range of 2.0-50 mg L(-1), with a sample throughput of 25 samples h(-1). The data were validated by analyzing six contaminated river water samples with the proposed method and with a comparative method validated and marketed by Merck, with good agreement between the results (t test, 95%). Copyright © 2014 Elsevier B.V. All rights reserved.
Monteiro, C A
1991-01-01
Two methods for estimating the prevalence of growth retardation in a population are evaluated: the classical method, which is based on the proportion of children whose height is more than 2 standard deviations below the expected mean of a reference population; and a new method recently proposed by Mora, which is based on the whole height distribution of observed and reference populations. Application of the classical method to several simulated populations leads to the conclusion that in most situations in developing countries the prevalence of growth retardation is grossly underestimated, and reflects only the presence of severe growth deficits. A second constraint with this method is a marked reduction of the relative differentials between more and less exposed strata. Application of Mora's method to the same simulated populations reduced but did not eliminate these constraints. A novel method for estimating the prevalence of growth retardation, which is based also on the whole height distribution of observed and reference populations, is also described and evaluated. This method produces better estimates of the true prevalence of growth retardation with no reduction in relative differentials.
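A toy illustration of the constraint described above: when an entire population's height distribution is shifted down by 1 SD, the classical −2 SD cut-off flags only a small fraction of it. The shift size and sample are arbitrary simulation choices.

```python
# Sketch: the classical cut-off estimator on a simulated shifted population.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(-1.0, 1.0, 100_000)   # whole population shifted down by 1 SD
print((z < -2.0).mean())             # ~0.16: the cut-off sees only the tail,
                                     # even though every child's expected
                                     # height is a full SD below the reference
```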
Nicolás Carcelén, Jesús; Marchante-Gayón, Juan Manuel; González, Pablo Rodríguez; Valledor, Luis; Cañal, María Jesús; Alonso, José Ignacio García
2017-08-18
The use of enriched stable isotopes is of outstanding importance in chemical metrology, as it allows the application of isotope dilution mass spectrometry (IDMS). Primary methods based on IDMS ensure the quality of analytical measurements and the traceability of results to the international system of units. However, the synthesis of isotopically labelled molecules from enriched stable isotopes is an expensive and difficult task. Both chemical and biochemical methods to produce labelled molecules have been proposed, but so far few cost-effective methods have been described. The aim of this study was to use the microalga Chlamydomonas reinhardtii to produce, at laboratory scale, 15N-labelled amino acids with a high isotopic enrichment. To do so, a culture medium containing 15NH4Cl was used. No kinetic isotope effect (KIE) was observed. The labelled proteins biosynthesized by the microorganism were extracted from the biomass, and the 15N-labelled amino acids were obtained after protein hydrolysis with HCl. The use of the wall-deficient strain CC503 cw92 mt+ is fit for purpose, as it assimilates only ammonia as a nitrogen source, avoiding isotope contamination with nitrogen from the atmosphere or from the reagents used in the culture medium, and enhancing the protein extraction efficiency compared to cell-walled wild-type Chlamydomonas. The isotopic enrichment of the labelled amino acids was calculated from their isotopic composition measured by gas chromatography mass spectrometry (GC-MS). The average isotopic enrichment for the 16 amino acids characterized was 99.56 ± 0.05%, and the concentration of the amino acids in the hydrolysate ranged from 18 to 90 µg/mL. Previously reported biochemical methods to produce isotopically labelled proteins have been applied in the fields of proteomics and fluxomics. For those approaches, low amounts of product are required, and the isotopic enrichment of the molecules has never been properly determined. So far, only 13C-labelled fatty acids have been isolated from labelled microalga biomass as valuable industrial products. In this study, we propose Chlamydomonas reinhardtii CC503 as a feasible microorganism and strain to produce labelled biomass from which a standard containing sixteen 15N-labelled amino acids can be obtained.
NASA Astrophysics Data System (ADS)
Chen, Zhe; Qiu, Zurong; Huo, Xinming; Fan, Yuming; Li, Xinghua
2017-03-01
A fiber-capacitive drop analyzer is an instrument that monitors a growing droplet to produce a capacitive opto-tensiotrace (COT). Each COT is an integration of fiber light intensity signals and capacitance signals and reflects the unique physicochemical properties of a liquid. In this study, we propose a method for solution identification and concentration quantification based on multivariate statistical methods. Eight characteristic values are extracted from each COT. A series of COT characteristic values of training solutions at different concentrations composes a data library for that kind of solution. A two-stage linear discriminant analysis is applied to analyze the different solution libraries and establish discriminant functions, by which test solutions can be discriminated. After determining the variety of a test solution, the Spearman correlation test and principal component analysis are used to filter and reduce the dimensions of the eight characteristic values, producing a new representative parameter. A cubic spline interpolation function is built between the parameters and concentrations, from which the concentration of the test solution can be calculated. Methanol, ethanol, n-propanol, and saline solutions are taken as experimental subjects. For each solution, nine or ten different concentrations were chosen as the standard library, and two other concentrations composed the test group. Using the methods described above, all eight test solutions were correctly identified, and the average relative error of the quantitative analysis was 1.11%. The proposed method is feasible; it enlarges the applicable scope of liquid recognition based on the COT and improves the precision of quantitative concentration analysis.
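A minimal sketch of the two-stage pipeline on toy data: LDA to identify the solution type, then a one-component PCA score mapped to concentration by a cubic spline. The eight-feature libraries below are synthetic stand-ins for real COT characteristic values.

```python
# Sketch: LDA identification followed by spline-based quantitation.
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
conc = np.linspace(0.5, 5.0, 10)                 # standard concentrations
lib = {k: off + conc[:, None] * w + rng.normal(0, 0.02, (10, 8))
       for k, (off, w) in enumerate([(0.0, np.linspace(1, 2, 8)),
                                     (3.0, np.linspace(2, 1, 8))])}

# stage 1: identify the solution type from its eight characteristic values
X = np.vstack(list(lib.values()))
y = np.repeat(list(lib), 10)
lda = LinearDiscriminantAnalysis().fit(X, y)

# stage 2: within that library, map a 1-D PCA score to concentration
test = lib[1][4] + rng.normal(0, 0.02, 8)        # unknown drop, true conc 2.5
kind = int(lda.predict(test[None])[0])
pca = PCA(n_components=1).fit(lib[kind])
s = pca.transform(lib[kind]).ravel()
order = np.argsort(s)                            # spline needs increasing x
spline = CubicSpline(s[order], conc[order])
print(kind, float(spline(pca.transform(test[None])[0, 0])))
```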
A practical approach to superresolution
NASA Astrophysics Data System (ADS)
Farsiu, Sina; Elad, Michael; Milanfar, Peyman
2006-01-01
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-Resolution (SR) methods are developed through the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene, producing a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues related to designing a practical SR system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust SR method applicable to images from different imaging systems. We study a general framework for optimal reconstruction of images from grayscale, color, or color filtered (CFA) cameras. The performance of our proposed method is boosted by using powerful priors and is robust to both measurement (e.g. CCD read out noise) and system noise (e.g. motion estimation error). Noting that the motion estimation is often considered a bottleneck in terms of SR performance, we introduce the concept of "constrained motions" for enhancing the quality of super-resolved images. We show that using such constraints will enhance the quality of the motion estimation and therefore results in more accurate reconstruction of the HR images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use efficient approximation of the Kalman Filter (KF) and adopt a dynamic point of view to the SR problem. Novel methods for addressing these issues are accompanied by experimental results on real data.
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
Multifunctional 3D printing of heterogeneous hydrogel structures
NASA Astrophysics Data System (ADS)
Nadernezhad, Ali; Khani, Navid; Skvortsov, Gözde Akdeniz; Toprakhisar, Burak; Bakirci, Ezgi; Menceloglu, Yusuf; Unal, Serkan; Koc, Bahattin
2016-09-01
Multimaterial additive manufacturing or three-dimensional (3D) printing of hydrogel structures provides the opportunity to engineer geometrically dependent functionalities. However, current fabrication methods are mostly limited to one type of material or only provide one type of functionality. In this paper, we report a novel method of multimaterial deposition of hydrogel structures based on an aspiration-on-demand protocol, in which the constitutive multimaterial segments of extruded filaments were first assembled in liquid state by sequential aspiration of inks into a glass capillary, followed by in situ gel formation. We printed different patterned objects with varying chemical, electrical, mechanical, and biological properties by tuning process and material related parameters, to demonstrate the abilities of this method in producing heterogeneous and multi-functional hydrogel structures. Our results show the potential of proposed method in producing heterogeneous objects with spatially controlled functionalities while preserving structural integrity at the switching interface between different segments. We anticipate that this method would introduce new opportunities in multimaterial additive manufacturing of hydrogels for diverse applications such as biosensors, flexible electronics, tissue engineering and organ printing.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high-spatial-resolution, or simply high-resolution (HR), panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with both high spectral and high spatial resolution. Some image fusion methods, such as the intensity-hue-saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method, provide HR MS images but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates by injecting high-frequency components from the HR pan image into the MS image; this family of methods introduces less spectral distortion. In this paper, we propose integrating the PCA method with the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image while improving the spatial resolution of the pan-sharpened image.
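As a rough illustration of how PCA-based fusion and HPF-style detail injection can be combined, the following Python sketch projects the upsampled MS bands onto their principal components, injects high-pass detail from the pan image into the first component, and inverts the transform. The gain matching, kernel size, and function name are illustrative assumptions; the paper's exact integration may differ.

```python
import numpy as np
from scipy import ndimage

def pca_hpf_pansharpen(ms, pan, kernel_size=5):
    """Fuse an upsampled MS cube (H, W, C) with a co-registered pan image (H, W)."""
    H, W, C = ms.shape
    X = ms.reshape(-1, C).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via eigendecomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]       # descending variance order
    pcs = Xc @ vecs                              # PC1 carries most spatial structure
    # High-pass detail extracted from the pan image (HPF step)
    pan = pan.astype(float)
    detail = pan - ndimage.uniform_filter(pan, size=kernel_size)
    # Inject gain-matched detail into PC1 rather than replacing it outright
    pc1 = pcs[:, 0]
    gain = pc1.std() / (detail.std() + 1e-12)
    pcs[:, 0] = pc1 + gain * detail.ravel()
    # Invert the PCA transform back to the spectral domain
    fused = pcs @ vecs.T + mean
    return fused.reshape(H, W, C)
```

Injecting only the high-frequency residue, rather than substituting the whole first component with the pan image as classic PCA sharpening does, is what limits spectral distortion.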
HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads
Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila
2014-01-01
Background and objective Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored the exactly mapped reads for compression. Inexactly mapped or unmapped reads were realigned against different reference genomes using an adaptive scheme that gradually shortens the read length. For the base quality values, we offer both lossy and lossless compression mechanisms. The lossy mechanism uses k-means clustering, where the user can adjust the balance between decompression quality and compression rate; lossless compression is obtained by setting k (the number of clusters) to the number of distinct quality values. Results The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings on the experimental datasets. It achieved 15% more storage savings than CRAM and a compression ratio comparable to that of Samcomp (two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ under a General Public License (GPL). Limitations Our method requires multiple reference genomes and prolongs execution time because of the additional alignments. Conclusions The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
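To illustrate the lossy quality-value scheme, here is a minimal Python sketch that clusters Phred scores with one-dimensional k-means and stores the per-base cluster index; setting k to the number of distinct scores recovers the lossless case. The function name and the use of scikit-learn are assumptions for illustration, not HUGO's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_qualities(quals, k=8, seed=0):
    """Map each Phred score to its nearest 1-D k-means centroid; the stored
    stream is the per-base cluster index. Choosing k equal to the number of
    distinct scores makes the round trip lossless."""
    q = np.asarray(quals, dtype=float).reshape(-1, 1)
    k = min(k, len(np.unique(q)))                 # k cannot exceed distinct values
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(q)
    codes = km.labels_.astype(np.uint8)           # compact stream to entropy-code
    centroids = km.cluster_centers_.ravel()
    reconstructed = np.rint(centroids[codes]).astype(int)
    return codes, centroids, reconstructed

# Example: decode a FASTQ quality string from Phred+33 and quantize it
quals = [ord(c) - 33 for c in "IIIIFFFF####"]
codes, centroids, recon = quantize_qualities(quals, k=3)
```

Smaller k means fewer symbols and a better compression rate at the cost of coarser reconstructed qualities, which is exactly the user-adjustable trade-off described above.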
Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian
2013-01-01
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information based on the uncertainty of action values and on the distance between alternative cached action values. Overall, the model by default chooses on the basis of the cheaper model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512
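The following Python sketch is a toy rendering of this control logic: act on cached values unless a Value-of-Information term, which grows with value uncertainty and shrinks with the gap between the best cached values, exceeds the simulation cost. The specific VoI formula, update rule, and function names are simplified stand-ins assumed for illustration, not the paper's computations.

```python
import numpy as np

def mixed_controller_choice(cached_means, cached_vars, sim_cost, simulate):
    """Toy mixed controller: act on cached (model-free) values unless a
    heuristic Value of Information justifies costly mental simulation.
    Assumes at least two candidate actions."""
    means = np.asarray(cached_means, dtype=float)
    var = np.asarray(cached_vars, dtype=float)
    top_two = np.sort(means)[-2:]
    gap = top_two[1] - top_two[0]          # distance between best cached values
    voi = var.mean() / (gap + 1e-9)        # high when uncertain and near-tied
    if voi <= sim_cost:
        return int(np.argmax(means))       # habitual, model-free choice
    samples = np.asarray(simulate(), dtype=float)  # one simulated return per action
    updated = 0.5 * means + 0.5 * samples  # refine cached values with rollouts
    return int(np.argmax(updated))

# Example: uncertain, near-tied cached values trigger simulation
choice = mixed_controller_choice([1.0, 1.05], [0.5, 0.5], sim_cost=0.2,
                                 simulate=lambda: [0.9, 1.4])
```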
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Summary Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save substantial human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on conjectures about these spaces being correct and being correctly verified. Guessing such conjectures correctly, though successful in some problems, is a nondeductive process that is not guaranteed to succeed (and hence is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it needs neither conjecturing nor otherwise theoretically deriving the functional form of the EIF, and it is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
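To see why the EIF is "in principle a derivative", the following Python sketch computes a numerical Gateaux derivative of a plug-in functional by mixing a small point mass at each observation into the empirical distribution. This is precisely the naive numerical differentiation the abstract contrasts with its deductive procedure; the function names and the variance-functional example are assumptions for illustration.

```python
import numpy as np

def numerical_eif(T, data, eps=1e-4):
    """Naive numerical Gateaux derivative of a weighted plug-in functional:
    EIF(x_i) ~ [T((1 - eps) * P_n + eps * delta_i) - T(P_n)] / eps."""
    n = len(data)
    base = np.full(n, 1.0 / n)           # empirical distribution P_n
    t0 = T(base, data)
    eif = np.empty(n)
    for i in range(n):
        w = (1.0 - eps) * base
        w = w.copy()
        w[i] += eps                      # mix in a point mass at observation i
        eif[i] = (T(w, data) - t0) / eps
    return eif

# Example: variance functional, whose true EIF is (x - mu)^2 - sigma^2
def variance_functional(w, x):
    mu = np.sum(w * x)
    return np.sum(w * (x - mu) ** 2)

x = np.random.default_rng(1).normal(size=500)
approx_eif = numerical_eif(variance_functional, x)
```

The numbers this produces approximate the EIF pointwise at the observed data, but, as the abstract notes, they carry no symbolic expression for how the EIF depends on the parameter, which is the gap the proposed deductive method addresses.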