Sample records for proposed technique compared

  1. Performance of dual inverter fed open end winding induction motor drive using carrier shift PWM techniques

    NASA Astrophysics Data System (ADS)

    Priya Darshini, B.; Ranjit, M.; Babu, V. Ramesh

    2018-04-01

    In this paper, different multicarrier PWM (MCPWM) techniques are proposed for a dual inverter fed open end winding induction motor (IM) drive to achieve multilevel operation. To generate the switching pulses for the dual inverter, a sinusoidal modulating signal is compared with multiple carrier signals. The common mode voltage (CMV) of the proposed open end winding induction motor drive has been analyzed. All the proposed techniques mitigate the CMV along with the harmonic distortion in the phase voltage. To validate the proposed work, simulation studies have been carried out using MATLAB/SIMULINK, and the corresponding results are presented and compared.
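
    The record describes the comparison step only at a high level; the sketch below shows, under assumed values for the carrier frequency and modulation index and an assumed phase-disposition carrier arrangement (the paper's carrier-shift variants differ in how the carriers are shifted), how gate pulses and a three-level pole voltage arise from comparing one sinusoidal modulating signal against two level-shifted triangular carriers.

    ```python
    import numpy as np

    f_mod, f_carr = 50.0, 2000.0        # assumed fundamental / carrier frequencies
    t = np.linspace(0.0, 0.04, 20000)   # two fundamental cycles

    m = 0.9 * np.sin(2 * np.pi * f_mod * t)  # modulating signal, index 0.9 (assumed)

    def triangle(t, f):
        """Unit-amplitude triangular carrier."""
        x = (t * f) % 1.0
        return 4.0 * np.abs(x - 0.5) - 1.0

    # Two level-shifted carriers spanning [0, 1] and [-1, 0] (phase disposition)
    c_high = 0.5 * triangle(t, f_carr) + 0.5
    c_low  = 0.5 * triangle(t, f_carr) - 0.5

    # Comparator outputs: switching pulses for the two inverter legs
    gate_upper = (m > c_high).astype(int)
    gate_lower = (m > c_low).astype(int)

    # Resulting three-level pole voltage in units of Vdc/2: -1, 0, or +1
    v_pole = gate_upper + gate_lower - 1
    ```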

  2. Wavelet filtered shifted phase-encoded joint transform correlation for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new wavelet-filtered shifted-phase-encoded joint transform correlation (WPJTC) technique has been proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal one has been selected by considering the discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternate pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and the extended Yale facial database under different environments such as illumination variation, noise, and 3D changes in facial expressions. Test results show that the proposed WPJTC yields better performance compared to alternate JTC-based face recognition techniques.

  3. Flood Detection/Monitoring Using Adjustable Histogram Equalization Technique

    PubMed Central

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    A flood monitoring technique using adjustable histogram equalization is proposed. The technique overcomes the limitations (over-enhancement, artifacts, and an unnatural look) of the existing technique by adjusting the contrast of images. The proposed technique takes pre- and post-flood images and applies different processing steps to generate a flood map without user interaction. The resultant flood maps can be used for flood monitoring and detection. Simulation results show that the proposed technique provides better output quality compared to the state-of-the-art existing technique. PMID:24558332
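
    A minimal sketch of the adjustable-equalization idea: plain histogram equalization with a clip limit that caps how much any gray level can be stretched. The differencing rule and threshold in the flood map are our assumptions, not the paper's pipeline.

    ```python
    import numpy as np

    def adjustable_hist_eq(img, clip=0.01):
        """Histogram equalization with an adjustable clip limit (a fraction
        of the pixel count) so contrast is stretched without the
        over-enhancement of plain equalization; img is uint8 grayscale."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        limit = clip * img.size
        excess = np.maximum(hist - limit, 0.0).sum()
        hist = np.minimum(hist, limit) + excess / 256.0  # redistribute excess
        cdf = np.cumsum(hist) / hist.sum()
        return (255.0 * cdf[img]).astype(np.uint8)

    def flood_map(pre, post, thresh=40):
        """Crude change map from equalized pre/post images (assumption)."""
        d = adjustable_hist_eq(post).astype(int) - adjustable_hist_eq(pre).astype(int)
        return (np.abs(d) > thresh).astype(np.uint8)
    ```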

  4. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose: To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods: CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. On 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results: The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion: The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion with potential utility in free-breathing 3D acquisitions. PMID:24123058

  5. Information Hiding In Digital Video Using DCT, DWT and CvT

    NASA Astrophysics Data System (ADS)

    Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb

    2018-05-01

    The video format used in the proposed secret information hiding technique is .AVI; the proposed data hiding technique embeds secret information into video frames by using the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB); the secret information is embedded in the red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extracting the secret information, the robustness of the proposed hiding technique is measured by computing the degradation of the extracted secret information, comparing it with the original secret information via the Normalized cross Correlation (NC). The experiments show that the error ratio of the proposed technique is 8% and the accuracy ratio is 92% when the Curvelet Transform (CvT) is used; with the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), the error rates are 11% and 14%, respectively, while the accuracy ratios are 89% and 86%, respectively. The experiments also show that Poisson noise gives better results than the other types of noise, while speckle noise gives the worst results. The proposed technique has been implemented in MATLAB R2016a.
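
    The abstract names the transforms and the NC metric but not the embedding rule. Below is a hedged sketch of one common DCT-domain variant, assuming an 8x8 red-channel block, an arbitrary mid-band coefficient position, and an illustrative embedding strength (none of these values come from the record); the NC definition follows the usual watermarking convention.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(block, bit, coeff=(3, 4), strength=8.0):
        """Embed one secret bit into a red-channel block by forcing the
        sign of one mid-band DCT coefficient (illustrative rule)."""
        C = dctn(block.astype(float), norm='ortho')
        C[coeff] = strength if bit else -strength
        return idctn(C, norm='ortho')

    def extract_bit(block, coeff=(3, 4)):
        """Recover the bit from the coefficient's sign."""
        return int(dctn(block.astype(float), norm='ortho')[coeff] > 0)

    def normalized_correlation(original, extracted):
        """NC between original and extracted secret: sum(o*e) / sum(o*o)."""
        o = np.asarray(original, float).ravel()
        e = np.asarray(extracted, float).ravel()
        return float(np.dot(o, e) / np.dot(o, o))
    ```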

  6. Novel permutation measures for image encryption algorithms

    NASA Astrophysics Data System (ADS)

    Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.

    2016-10-01

    This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are then introduced to evaluate the effectiveness of permutation techniques. These measures are (1) Percentage of Adjacent Pixels Count (PAPC) and (2) Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied on several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.
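
    The record defines PAPC and DBAP only by name; the sketch below encodes one plausible reading of the two measures, assuming the permutation is given as arrays new_r, new_c mapping each original pixel to its scrambled position. The adjacency threshold and the horizontal-pairs-only simplification are our assumptions.

    ```python
    import numpy as np

    def adjacency_measures(new_r, new_c):
        """One plausible reading of PAPC / DBAP for an N x N image:
        new_r[i, j], new_c[i, j] give the scrambled position of original
        pixel (i, j). Only horizontally adjacent original pairs are used."""
        dr = new_r[:, 1:] - new_r[:, :-1]
        dc = new_c[:, 1:] - new_c[:, :-1]
        dist = np.hypot(dr, dc)
        papc = 100.0 * np.mean(dist <= np.sqrt(2))  # % of pairs still adjacent
        dbap = float(dist.mean())                   # mean post-permutation gap
        return papc, dbap

    # A random permutation should score low PAPC and high DBAP
    N = 64
    idx = np.random.permutation(N * N)
    new_r, new_c = np.divmod(idx.reshape(N, N), N)
    print(adjacency_measures(new_r, new_c))
    ```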

  7. 2D DOST based local phase pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST for preliminary preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique among TFRs. Utilizing the 2-D S-transform for preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.

  8. An angle-dependent estimation of CT x-ray spectrum from rotational transmission measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan; Ramirez-Giraldo, Juan Carlos

    2014-06-15

    Purpose: Computed tomography (CT) performance as well as dose and image quality is directly affected by the x-ray spectrum. However, the current assessment approaches of the CT x-ray spectrum require costly measurement equipment and complicated operational procedures, and are often limited to the spectrum corresponding to the center of rotation. In order to address these limitations, the authors propose an angle-dependent estimation technique, where the incident spectra across a wide range of angular trajectories can be estimated accurately with only a single phantom and a single axial scan in the absence of the knowledge of the bowtie filter. Methods: The proposed technique uses a uniform cylindrical phantom, made of ultra-high-molecular-weight polyethylene and positioned in an off-centered geometry. The projection data acquired with an axial scan have a twofold purpose. First, they serve as a reflection of the transmission measurements across different angular trajectories. Second, they are used to reconstruct the cross sectional image of the phantom, which is then utilized to compute the intersection length of each transmission measurement. With each CT detector element recording a range of transmission measurements for a single angular trajectory, the spectrum is estimated for that trajectory. A data conditioning procedure is used to combine information from hundreds of collected transmission measurements to accelerate the estimation speed, to reduce noise, and to improve estimation stability. The proposed spectral estimation technique was validated experimentally using a clinical scanner (Somatom Definition Flash, Siemens Healthcare, Germany) with spectra provided by the manufacturer serving as the comparison standard. Results obtained with the proposed technique were compared against those obtained from a second conventional transmission measurement technique with two materials (i.e., Cu and Al). After validation, the proposed technique was applied to measure spectra from the clinical system across a range of angular trajectories [−15°, 15°] and spectrum settings (80, 100, 120, 140 kVp). Results: At 140 kVp, the proposed technique was comparable to the conventional technique in terms of the mean energy difference (MED, −0.29 keV) and the normalized root mean square difference (NRMSD, 0.84%) from the comparison standard compared to 0.64 keV and 1.56%, respectively, with the conventional technique. The average absolute MEDs and NRMSDs across kVp settings and angular trajectories were less than 0.61 keV and 3.41%, respectively, which indicates a high level of estimation accuracy and stability. Conclusions: An angle-dependent estimation technique of CT x-ray spectra from rotational transmission measurements was proposed. Compared with the conventional technique, the proposed method simplifies the measurement procedures and enables incident spectral estimation for a wide range of angular trajectories. The proposed technique is suitable for rigorous research objectives as well as routine clinical quality control procedures.

  9. Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation

    NASA Astrophysics Data System (ADS)

    Sleesongsom, S.; Bureerat, S.

    2018-03-01

    This paper proposes an extension of a new concept for path generation from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, which is one kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. In this study, two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original TLBO.
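
    As the abstract says only that the constraint handling is "one kind of penalty technique", the sketch below shows a generic quadratic penalty of that kind, under the assumption that constraints are expressed as g(x) <= 0; the penalty form and weight rho are illustrative, not the paper's.

    ```python
    def penalized_objective(f, constraints, x, rho=1e3):
        """Fold constraint violations into the objective so an unconstrained
        optimizer such as TLBO can be applied. Constraints are assumed to
        be callables g with feasibility meaning g(x) <= 0; the quadratic
        form and the weight rho are illustrative."""
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    ```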

  10. Feature-extracted joint transform correlation.

    PubMed

    Alam, M S

    1995-12-10

    A new technique for real-time optical character recognition that uses a joint transform correlator is proposed. This technique employs feature-extracted patterns for the reference image to detect a wide range of characters in one step. The proposed technique significantly enhances the processing speed when compared with the presently available joint transform correlator architectures and shows feasibility for multichannel joint transform correlation.
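
    For readers unfamiliar with joint transform correlation, here is a minimal classical JTC sketch (without the paper's feature-extraction step): the reference and scene are juxtaposed, the joint power spectrum is formed, and a second Fourier transform yields a correlation plane whose off-center cross terms indicate matches.

    ```python
    import numpy as np

    def jtc_correlation_plane(reference, scene):
        """Classical joint transform correlator: juxtapose reference and
        scene (equal heights assumed), form the joint power spectrum, and
        Fourier-transform it again; matches appear as off-center peaks."""
        joint = np.hstack([reference, scene]).astype(float)
        jps = np.abs(np.fft.fft2(joint)) ** 2
        return np.fft.fftshift(np.abs(np.fft.ifft2(jps)))
    ```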

  11. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease the radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
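
    A skeleton of a Fourier-domain iterative reconstruction with TV regularization is sketched below; for brevity a masked Cartesian FFT stands in for the paper's NUFFT of fan-beam data (a real implementation would use a NUFFT library), and the step size and TV weight are illustrative.

    ```python
    import numpy as np

    def tv_grad(x):
        """Gradient of smoothed isotropic total variation (periodic bounds)."""
        dx = np.roll(x, -1, 0) - x
        dy = np.roll(x, -1, 1) - x
        mag = np.sqrt(dx**2 + dy**2 + 1e-8)
        div = (dx / mag - np.roll(dx / mag, 1, 0)) \
            + (dy / mag - np.roll(dy / mag, 1, 1))
        return -div

    def fourier_tv_recon(y, mask, n_iter=200, lam=0.02, step=0.5):
        """Iterate between image space and k-space, enforcing data
        consistency on the sampled frequencies only; y is zero-filled
        k-space data and mask flags the sampled (sparse-view) entries."""
        x = np.real(np.fft.ifft2(y))
        for _ in range(n_iter):
            r = mask * (np.fft.fft2(x) - y)   # k-space residual
            x = x - step * (np.real(np.fft.ifft2(r)) + lam * tv_grad(x))
        return x
    ```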

  12. A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures

    NASA Astrophysics Data System (ADS)

    Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham

    2014-11-01

    This work represents a comparative study of a novel progressive spectrophotometric resolution technique, namely the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques, namely successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.

  13. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation and taking advantage of the scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation compared to Euclidean normalisation.
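
    One simplified reading of the floating-point scaling idea: rescale each row of the separation matrix by a power of two, which only shifts the floating-point exponent and so leaves the mantissa, and hence the separated waveform shape, untouched. The row-norm target is an assumption.

    ```python
    import numpy as np

    def scale_separation_matrix(W, target=1.0):
        """Rescale each row of the BSS separation matrix by a power of two
        so the recovered sources neither saturate nor fade. A power-of-two
        factor only shifts the floating-point exponent, leaving the
        mantissa (and the separated waveform shape) untouched."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        shift = np.round(np.log2(target / norms))  # nearest exponent shift
        return W * 2.0 ** shift
    ```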

  14. Dynamic lens and monovision 3D displays to improve viewer comfort.

    PubMed

    Johnson, Paul V; Parnell, Jared Aq; Kim, Joohwan; Saunter, Christopher D; Love, Gordon D; Banks, Martin S

    2016-05-30

    Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect, causing mismatches between accommodation and vergence: the eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes. The power of the lenses is yoked to the expected vergence distance, thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. This dynamic-lens technique therefore offers an easily implemented way of reducing the vergence-accommodation conflict and thereby improving viewer experience.
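
    The yoking rule for the first technique can be sketched as a one-line diopter calculation, assuming a thin lens at the eye and our own sign convention (the paper does not give the formula):

    ```python
    def yoked_lens_power(screen_dist_m, vergence_dist_m):
        """Variable lens power (diopters) that makes the accommodative
        demand through the lens match the simulated vergence distance.
        Assumes a thin lens at the eye; positive power reduces demand."""
        return 1.0 / screen_dist_m - 1.0 / vergence_dist_m

    # Example: screen at 0.5 m, object simulated at 0.33 m -> about -1 D
    print(yoked_lens_power(0.5, 0.33))
    ```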

  15. Application of Cross-Correlation Green's Function Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of the envelope correlation coefficient (ECC) of an arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits a significant reduction in ECC computation time compared to the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering parameter based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
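
    For contrast, the conventional scattering-parameter estimate of ECC that the abstract argues against (the well-known Blanch formula, valid only for lossless antennas) is easy to state:

    ```python
    import numpy as np

    def ecc_from_s_params(S11, S12, S21, S22):
        """Envelope correlation coefficient from S-parameters (Blanch et
        al.); valid only for lossless antennas -- one of the limitations
        the FDTD-CGF approach is meant to avoid. Inputs may be complex
        arrays sampled over frequency."""
        num = np.abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
        den = ((1 - np.abs(S11)**2 - np.abs(S21)**2)
               * (1 - np.abs(S22)**2 - np.abs(S12)**2))
        return num / den
    ```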

  16. Classification of forensic autopsy reports through conceptual graph-based document representation model.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2018-06-01

    Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation. In document representation, a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent but not necessarily discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD. In addition, the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results. The experimental results indicated that the CGDR technique achieved a 12% to 15% improvement in accuracy compared with fully automated document representation baseline techniques. Moreover, two-level classification obtained better results compared with one-level classification. The promising results of the proposed conceptual graph-based document representation technique suggest that pathologists can adopt the proposed system as a basis for a second opinion, thereby supporting them in effectively determining CoD.

  17. Wavelet Transform Based Filter to Remove the Notches from Signal Under Harmonic Polluted Environment

    NASA Astrophysics Data System (ADS)

    Das, Sukanta; Ranjan, Vikash

    2017-12-01

    This work proposes to eliminate the notches in the synchronizing signal required for converter operation, which appear due to the switching of semiconductor devices connected to the system in a harmonic-polluted environment. The disturbances in the signal are suppressed by a novel wavelet-based filtering technique. In the proposed technique, the notches in the signal are detected and eliminated by a wavelet-based multi-rate filter using 'Daubechies4' (db4) as the mother wavelet. The computational complexity of the adopted technique is much lower than that of conventional notch filtering techniques. The proposed technique is developed in MATLAB/Simulink and finally validated with a dSPACE-1103 interface. The recovered signal, thus obtained, is almost free of notches.
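
    A crude stand-in for the described filter, using PyWavelets with 'db4': the sharp notches live mainly in the detail sub-bands, so zeroing them and reconstructing yields a smooth synchronizing signal. The decomposition level and the hard zeroing (rather than the paper's multi-rate filter design) are assumptions.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def remove_notches(signal, level=4):
        """Decompose with 'db4', zero the detail sub-bands that carry the
        sharp notches, and reconstruct a clean synchronizing signal."""
        coeffs = pywt.wavedec(signal, 'db4', level=level)
        cleaned = [coeffs[0]] + [np.zeros_like(d) for d in coeffs[1:]]
        return pywt.waverec(cleaned, 'db4')[: len(signal)]
    ```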

  18. Modeling and control of distributed energy systems during transition between grid connected and standalone modes

    NASA Astrophysics Data System (ADS)

    Arafat, Md Nayeem

    Distributed generation systems (DGs) have been penetrating our energy networks with the advancement of renewable energy sources and energy storage elements. These systems can operate in synchronism with the utility grid, referred to as the grid connected (GC) mode of operation, or work independently, referred to as the standalone (SA) mode of operation. There is a need to ensure continuous power flow during the transition between GC and SA modes, referred to as the transition mode, in operating DGs. In this dissertation, efficient and effective transition control algorithms are developed for DGs operating either independently or collectively with other units. Three techniques are proposed in this dissertation to manage the proper transition operations. In the first technique, a new control algorithm is proposed for an independent DG which can operate in SA and GC modes. The proposed transition control algorithm ensures low total harmonic distortion (THD) and less voltage fluctuation during mode transitions compared to the other techniques. In the second technique, a transition control is suggested for a collective of DGs operating in a microgrid system architecture to improve the reliability of the system, reduce the cost, and provide better performance. In this technique, one of the DGs in a microgrid system, referred to as a dispatch unit, takes the additional responsibility of mode transitioning to ensure smooth transition and supply/demand balance in the microgrid. In the third technique, an alternative transition technique is proposed by hybridizing the current and droop controllers. The proposed hybrid transition control technique has higher reliability compared to the dispatch unit concept. During the GC mode, the proposed hybrid controller uses current control. During the SA mode, the hybrid controller uses droop control. During the transition mode, both controllers participate in formulating the inverter output voltage, but with different weights or coefficients. Voltage source inverters interfacing the DGs, as well as the proposed transition control algorithms, have been modeled to analyze the stability of the algorithms in different configurations. The performances of the proposed algorithms are verified through simulation and experimental studies. It has been found that the proposed control techniques can provide smooth power flow to the local loads during the GC, SA and transition modes.

  19. Statistical normalization techniques for magnetic resonance imaging.

    PubMed

    Shinohara, Russell T; Sweeney, Elizabeth M; Goldsmith, Jeff; Shiee, Navid; Mateen, Farrah J; Calabresi, Peter A; Jarso, Samson; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2014-01-01

    While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
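
    The simplest normalization in the spirit the abstract describes is a z-score over a biologically meaningful reference region, which gives intensities the same interpretation across scans; the whole-brain mask used here is an assumption.

    ```python
    import numpy as np

    def zscore_normalize(volume, brain_mask):
        """Express voxel intensities in standard-deviation units about the
        mean over a reference region, so values are comparable across
        scans and subjects. volume and brain_mask are numpy arrays of the
        same shape; brain_mask is boolean."""
        ref = volume[brain_mask]
        return (volume - ref.mean()) / ref.std()
    ```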

  20. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, modern systems-on-chip (SOCs) require high test complexity and cost due to several issues such as limited pin count and the integration of analog and digital mixed circuits. Therefore, self-calibration is an excellent and promising method to improve yield and to reduce manufacturing cost by simplifying the test complexity, because it is possible to address process variation effects by means of self-calibration techniques. Since prior published calibration techniques were developed for specific targeted applications, they are not easily utilized for other applications. In order to solve the aforementioned issues, in this dissertation, several novel self-calibration design techniques in mixed-signal circuits are proposed for an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. These are essential components in SOCs, and the proposed self-calibration approach also compensates for process variations. The proposed novel self-calibration approach targets the successive approximation (SA) ADC. First of all, the offset error of the comparator in the SA-ADC is reduced using the proposed approach by enabling the capacitor array at the input nodes for better matching. In addition, the auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every step of deciding a one-bit code, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously. The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, in order to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area. The technique removes conversion time waste and extends the SAR operation speed easily. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm² and it consumes 4.62 mW with a 1.2 V power supply.

  2. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions suffer from decreased visibility and hidden details. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over-/under-enhancement issues, while fuzzy-based enhancement techniques suffer from over-/under-saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique is proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others regarding visible edge ratio, color gradients and number of saturated pixels.

  3. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  4. Defect inspection using a time-domain mode decomposition technique

    NASA Astrophysics Data System (ADS)

    Zhu, Jinlong; Goddard, Lynford L.

    2018-03-01

    In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges in killer defect inspection. The proposed technique enables the dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and the classification of defect types by comparing the differences in frequencies. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.

  5. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
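
    The shape of the two-stage idea, sketched with the PyEMD package (pip install EMD-signal); a plain local linear fit stands in for the paper's local linear quantile regression, and the window length is an assumption.

    ```python
    import numpy as np
    from PyEMD import EMD  # pip install EMD-signal

    def emd_forecast(prices, window=30):
        """Decompose the closing-price series into IMFs (plus residue),
        fit a local linear trend to the tail of each component, and sum
        the one-step-ahead component forecasts. prices is a 1-D array."""
        imfs = EMD()(prices)
        forecast = 0.0
        t = np.arange(window)
        for imf in imfs:
            slope, intercept = np.polyfit(t, imf[-window:], 1)
            forecast += intercept + slope * window  # extrapolate one step
        return forecast
    ```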

  6. Metrics in method engineering

    NASA Astrophysics Data System (ADS)

    Brinkkemper, S.; Rossi, M.

    1994-12-01

    As customizable computer aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been growing interest in the systematic construction of methods and their support environments, i.e. method engineering. To aid method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other hand. The proposed metrics provide a relatively fast and simple way to analyze the properties of a technique (or method), and, when accompanied with other selection criteria, can be used for estimating the cost of learning a technique and its relative complexity compared to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.

  7. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. It is achieved by using two multi-attribute decision making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and mining methods. The proposed techniques give higher accuracy and faster computation capability when compared with other decision making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine. The results of these techniques are compared with methods used in earlier research works. The results show that the conventional cut and fill method is the most suitable mining method.

  8. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, namely a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
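
    A hedged sketch of the projection idea: correlate the row and column sums (1-D projections) of two interrogation windows instead of their full 2-D intensity fields, cutting the per-candidate cost from O(N^2) to O(N); how the two axis scores are combined is our assumption.

    ```python
    import numpy as np

    def zncc_1d(a, b):
        """Zero-normalized cross-correlation of two 1-D signals."""
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    def projection_zncc(win_a, win_b):
        """Correlate 1-D projections (row and column sums) of the two
        interrogation windows instead of their 2-D fields; averaging the
        two axis scores is our assumption."""
        sx = zncc_1d(win_a.sum(axis=0).astype(float),
                     win_b.sum(axis=0).astype(float))
        sy = zncc_1d(win_a.sum(axis=1).astype(float),
                     win_b.sum(axis=1).astype(float))
        return 0.5 * (sx + sy)
    ```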

  9. High resolution OCT image generation using super resolution via sparse representation

    NASA Astrophysics Data System (ADS)

    Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi

    2017-02-01

    In this paper we propose a technique for obtaining a high resolution (HR) image from a single low resolution (LR) image, using a jointly learned dictionary, on the basis of image statistics research. It suggests that, with an appropriate choice of an over-complete dictionary, image patches can be well represented as a sparse linear combination of its atoms. Medical imaging for clinical analysis and medical intervention is used for creating visual representations of the interior of a body, as well as visual representations of the function of some organs or tissues (physiology). A number of medical imaging techniques are in use, like MRI, CT scan, X-rays and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging, and one of its uses is in ophthalmology, where it is used for analysis of the choroidal thickness in the eyes in healthy and disease states such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We have proposed a technique for enhancing OCT images which can be used for clearly identifying and analyzing particular diseases. Our method uses a dictionary learning technique for generating a high resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. The proposed method with both dictionaries produces HR images that are superior in quality to those of the other SR method. The proposed technique is very effective for noisy OCT images and produces up-sampled and enhanced OCT images.

  10. Calculation of grain boundary normals directly from 3D microstructure images

    DOE PAGES

    Lieberman, E. J.; Rollett, A. D.; Lebensohn, R. A.; ...

    2015-03-11

    The determination of grain boundary normals is an integral part of the characterization of grain boundaries in polycrystalline materials. These normal vectors are difficult to quantify due to the discretized nature of available microstructure characterization techniques. The most common method to determine grain boundary normals is by generating a surface mesh from an image of the microstructure, but this process can be slow, and is subject to smoothing issues. A new technique is proposed, utilizing first order Cartesian moments of binary indicator functions, to determine grain boundary normals directly from a voxelized microstructure image. In order to validate the accuracy of this technique, the surface normals obtained by the proposed method are compared to those generated by a surface meshing algorithm. Specifically, the local divergence between the surface normals obtained by different variants of the proposed technique and those generated from a surface mesh of a synthetic microstructure constructed using a marching cubes algorithm followed by Laplacian smoothing is quantified. Next, surface normals obtained with the proposed method from a measured 3D microstructure image of a Ni polycrystal are used to generate grain boundary character distributions (GBCD) for Σ3 and Σ9 boundaries, and compared to the GBCD generated using a surface mesh obtained from the same image. Finally, the results show that the proposed technique is an efficient and accurate method to determine voxelized fields of grain boundary normals.
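
    The core moment computation can be sketched directly: over a small neighbourhood, the first Cartesian moment of a grain's binary indicator function points toward the grain interior, so its negation approximates the outward boundary normal. The neighbourhood radius is an assumption, and border voxels are ignored.

    ```python
    import numpy as np

    def boundary_normal(indicator, voxel, radius=2):
        """Outward normal at a boundary voxel from the first Cartesian
        moment of the grain's binary indicator over a (2*radius+1)^3
        neighbourhood. voxel = (z, y, x) must lie at least `radius`
        voxels from the image border."""
        z, y, x = voxel
        nb = indicator[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1].astype(float)
        g = np.arange(-radius, radius + 1, dtype=float)
        Z, Y, X = np.meshgrid(g, g, g, indexing='ij')
        m = np.array([(nb * Z).sum(), (nb * Y).sum(), (nb * X).sum()])
        return -m / np.linalg.norm(m)  # moment points inward; negate it
    ```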

  11. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    NASA Astrophysics Data System (ADS)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.

  12. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits: higher order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
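
    The flavour of automatic step adjustment can be shown with generic step-doubling error control on a power-evolution ODE dy/dz = f(z, y); the RK4 stepper, tolerance logic, and growth/shrink factors are illustrative, not the paper's higher-order APA formulas.

    ```python
    import numpy as np

    def integrate_adaptive(f, y0, z_end, h0=0.1, tol=1e-6):
        """Advance dy/dz = f(z, y) with step doubling: compare one full
        RK4 step against two half steps and grow or shrink the section
        length to hold the local error near tol."""
        def rk4(z, y, h):
            k1 = f(z, y)
            k2 = f(z + h / 2, y + h / 2 * k1)
            k3 = f(z + h / 2, y + h / 2 * k2)
            k4 = f(z + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        z, y, h = 0.0, np.asarray(y0, float), h0
        while z < z_end:
            h = min(h, z_end - z)
            y_full = rk4(z, y, h)
            y_half = rk4(z + h / 2, rk4(z, y, h / 2), h / 2)
            err = np.max(np.abs(y_full - y_half))
            if err > tol:
                h *= 0.5          # reject the step: halve the section
                continue
            z, y = z + h, y_half  # accept the more accurate result
            if err < tol / 10:
                h *= 2.0          # error well below tolerance: enlarge
        return y
    ```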

  13. Fourier-Mellin moment-based intertwining map for image encryption

    NASA Astrophysics Data System (ADS)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and also compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, the unified average changing intensity and the number of pixels change rate. The simulation results reveal that the proposed technique provides a high level of security and robustness against various types of attacks.

  14. An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model

    PubMed Central

    Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq

    2018-01-01

    For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques. PMID:29694429

  16. TU-CD-207-05: A Novel Digital Tomosynthesis System Using Orthogonal Scanning Technique: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, C; Kauweloa, K

    2015-06-15

    Purpose: As an alternative to full tomographic imaging techniques such as cone-beam computed tomography (CBCT), there is growing interest in adopting digital tomosynthesis (DTS) for diagnostic as well as therapeutic applications. The aim of this study is to propose a new DTS system using a novel orthogonal scanning technique, which can provide DTS images of superior quality compared to the conventional DTS scanning system. Methods: Unlike the conventional DTS scanning system, the proposed DTS is reconstructed from two sets of orthogonal patient scans: 1) X-ray projections acquired along a transverse trajectory, and 2) an additional set of X-ray projections acquired along the vertical direction at the mid angle of the previous transverse scan. To reconstruct the DTS, we used a modified filtered backprojection technique to account for the different scanning directions of each projection set. We evaluated the performance of our method using numerical planning CT data of a liver cancer patient and a physical pelvis phantom experiment. The results were compared with conventional DTS techniques with single transverse and vertical scanning. Results: Both the numerical simulation and the physical experiment showed that the resolution and contrast of anatomical structures were much clearer using our method. Specifically, compared with transversely scanned DTS, the edge and contrast of anatomical structures along the Left-Right (LR) direction were comparable; however, considerable discrepancy and enhancement could be observed along the Superior-Inferior (SI) direction using our method. The opposite was observed when vertically scanned DTS was compared. Conclusion: In this study, we propose a novel DTS system using an orthogonal scanning technique. The results indicate that the image quality of our novel DTS system is superior to that of the conventional DTS system. This makes our DTS system potentially useful in various on-line clinical applications.

  17. Field Calibration of Wind Direction Sensor to the True North and Its Application to the Daegwanryung Wind Turbine Test Sites

    PubMed Central

    Lee, Jeong Wan

    2008-01-01

    This paper proposes a field calibration technique for aligning a wind direction sensor to the true north. The proposed technique uses synchronized measurements of images captured by a camera and the output voltage of the wind direction sensor. The true wind direction was evaluated through image processing techniques, using the captured picture of the sensor, in the least-squares sense. Then, the evaluated true value was compared with the measured output voltage of the sensor. This technique solves the discordance problem of the wind direction sensor that arises in the process of installing a meteorological mast. For the proposed technique, some uncertainty analyses are presented and the calibration accuracy is discussed. Finally, the proposed technique was applied to the real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental testing estimated the stable misalignment and the uncertainty level. It is confirmed that the error range of the misalignment from true north can be expected to decrease within the stated credibility level. PMID:27873957
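
    The offset estimation can be sketched as a circular least-squares fit between the image-derived reference directions and the vane's voltage-derived directions; the volts-per-revolution scale is an assumed sensor characteristic.

    ```python
    import numpy as np

    def fit_misalignment(theta_image_deg, sensor_volts, volts_per_rev=5.0):
        """Least-squares offset of the vane from true north: convert the
        sensor voltage to an angle (volts_per_rev is an assumed 0-5 V per
        360 deg characteristic) and take the circular mean of the angular
        differences so wrap-around at 360 deg is handled correctly."""
        theta_sensor = 360.0 * np.asarray(sensor_volts, float) / volts_per_rev
        d = np.deg2rad(np.asarray(theta_image_deg, float) - theta_sensor)
        offset = np.degrees(np.arctan2(np.sin(d).mean(), np.cos(d).mean()))
        return offset  # add to sensor readings to align with true north
    ```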

  18. Determination of celestial bodies orbits and probabilities of their collisions with the Earth

    NASA Astrophysics Data System (ADS)

    Medvedev, Yuri; Vavilov, Dmitrii

In this work we have developed a universal method to determine the orbits of small bodies in the Solar System. The method considers different planes of the body's motion and selects the most appropriate one. Given an orbital plane, we can calculate the geocentric distances at the times of observation and consequently determine all orbital elements. Another technique that we propose here addresses the problem of estimating the probability of a collision between a celestial body and the Earth. This technique uses a coordinate system associated with the nominal osculating orbit. We have compared the proposed technique with Monte Carlo simulation. The results of the two methods exhibit satisfactory agreement, while the proposed method is advantageous in computation time.

  19. Coherent Pound-Drever-Hall technique for high resolution fiber optic strain sensor at very low light power

    NASA Astrophysics Data System (ADS)

    Wu, Mengxin; Liu, Qingwen; Chen, Jiageng; He, Zuyuan

    2017-04-01

The Pound-Drever-Hall (PDH) technique has been widely adopted for ultrahigh-resolution fiber-optic sensors, but its performance degrades seriously as the light power drops. To solve this problem, we developed a coherent PDH technique for weak optical signal detection, with which the signal-to-noise ratio (SNR) of the demodulated PDH signal is dramatically improved. In demonstration experiments, a high-resolution fiber-optic sensor using the proposed technique is realized, and nanostrain-order strain resolution at a light power as low as -43 dBm is achieved, about 15 dB lower than with the classical PDH technique. The proposed coherent PDH technique has great potential in longer-distance and larger-scale sensor networks.

  20. Process techniques of charge transfer time reduction for high speed CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhongxiang, Cao; Quanliang, Li; Ye, Han; Qi, Qin; Peng, Feng; Liyuan, Liu; Nanjian, Wu

    2014-11-01

This paper proposes pixel process techniques to reduce the charge transfer time in high-speed CMOS image sensors. These techniques increase the lateral conductivity of the photo-generated carriers in a pinned photodiode (PPD) and the voltage difference between the PPD and the floating diffusion (FD) node by controlling and optimizing the N doping concentration in the PPD and the threshold voltage of the reset transistor, respectively. The techniques effectively shorten the charge transfer time from the PPD to the FD node. The proposed process techniques need no extra masks and do not harm the fill factor. A sub-array of 32 × 64 pixels was designed and implemented in a 0.18 μm CIS process with five implantation conditions splitting the N region of the PPD. Simulation and measurement results demonstrate that the charge transfer time can be decreased by using the proposed techniques. Comparing the charge transfer time of pixels with the different N-region implantation conditions, a charge transfer time of 0.32 μs is achieved and image lag is reduced by 31% using the proposed process techniques.

  1. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid-convergence schedule without the proposed techniques, respectively.

  2. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare the distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that provides a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method treats tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute the critical values of exact tests.
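
    To make the Monte Carlo p-value idea (method 1 above) concrete, the following minimal Python sketch estimates an exact-test p-value by permutation; the test statistic, the permutation scheme, and the function names are illustrative stand-ins, not the vxdbel implementation (which is a Stata command).

```python
# Minimal sketch of Monte Carlo p-value estimation for a K-sample exact test.
# `stat_fn` is a placeholder for the (density-based) test statistic.
import numpy as np

def monte_carlo_pvalue(stat_fn, samples, n_sim=10_000, rng=None):
    """stat_fn : callable mapping a list of K samples to a scalar statistic
    samples : list of K one-dimensional arrays of observations"""
    rng = np.random.default_rng(rng)
    observed = stat_fn(samples)
    pooled = np.concatenate(samples)
    sizes = [len(s) for s in samples]

    exceed = 0
    for _ in range(n_sim):
        rng.shuffle(pooled)  # permute under H0: all K distributions equal
        parts = np.split(pooled, np.cumsum(sizes)[:-1])
        if stat_fn(parts) >= observed:
            exceed += 1
    # add-one correction keeps the estimate away from an impossible p = 0
    return (exceed + 1) / (n_sim + 1)
```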

  3. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

Discrete transform techniques such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform and MFCC techniques. A linear support vector machine is used as the classifier. Experimental results show that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, as compared with CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
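
    As a rough illustration of a transform-based CAD pipeline of this kind, the sketch below extracts low-frequency 2-D DCT coefficients as features and trains a linear SVM; the block size, helper names, and data layout are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: keep the top-left k x k DCT coefficients (lowest spatial
# frequencies) of each image as its feature vector, then fit a linear SVM.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import LinearSVC

def dct_features(image, k=8):
    coeffs = dctn(image, norm="ortho")
    return coeffs[:k, :k].ravel()

def train_cad(images, labels):
    """images: iterable of 2-D arrays; labels: AD vs. control."""
    X = np.array([dct_features(img) for img in images])
    return LinearSVC().fit(X, labels)
```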

  4. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    NASA Astrophysics Data System (ADS)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All techniques presented here are evaluated by running them through face recognition software. Data utility preservation was evaluated under gender and facial expression classification. Results quantifying the trade-off between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good trade-off between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88%, obtained from the common figures of merit using the privacy-awareness map.
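
    The foveation idea can be sketched as a spatially varying blur: the region around a fixation point stays sharp while the periphery is smoothed. The following Python sketch blends one Gaussian-blurred copy with the original via a radial ramp, which is a simplification of a real-time multi-resolution implementation; all parameter values and names are illustrative.

```python
# Sketch of foveation as a radial blend between a sharp and a blurred image.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, center, sharp_radius=30.0, max_sigma=8.0):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    # blend weight: 0 inside the fovea, ramping to 1 in the periphery
    alpha = np.clip((dist - sharp_radius) / sharp_radius, 0.0, 1.0)
    blurred = gaussian_filter(image.astype(float), sigma=max_sigma)
    return (1 - alpha) * image + alpha * blurred
```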

  5. NetCoDer: A Retransmission Mechanism for WSNs Based on Cooperative Relays and Network Coding

    PubMed Central

    Valle, Odilson T.; Montez, Carlos; Medeiros de Araujo, Gustavo; Vasques, Francisco; Moraes, Ricardo

    2016-01-01

    Some of the most difficult problems to deal with when using Wireless Sensor Networks (WSNs) are related to the unreliable nature of communication channels. In this context, the use of cooperative diversity techniques and the application of network coding concepts may be promising solutions to improve the communication reliability. In this paper, we propose the NetCoDer scheme to address this problem. Its design is based on merging cooperative diversity techniques and network coding concepts. We evaluate the effectiveness of the NetCoDer scheme through both an experimental setup with real WSN nodes and a simulation assessment, comparing NetCoDer performance against state-of-the-art TDMA-based (Time Division Multiple Access) retransmission techniques: BlockACK, Master/Slave and Redundant TDMA. The obtained results highlight that the proposed NetCoDer scheme clearly improves the network performance when compared with other retransmission techniques. PMID:27258280

  6. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania’s South Esk Hydrology model developed by CSIRO. Root mean squared error statistical methods were performed for performance evaluations. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
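
    A minimal sketch of the reduction approach with IDW: space is interpolated with inverse distance weighting while the time dimension is handled independently, slice by slice. The power p = 2 and the helper names are illustrative choices, not necessarily those used in the study.

```python
# Sketch: inverse distance weighting in space, applied per time step
# (the "reduction" approach: time treated independently of space).
import numpy as np

def idw(known_xy, known_vals, query_xy, p=2.0):
    d = np.linalg.norm(known_xy[None, :, :] - query_xy[:, None, :], axis=2)
    d = np.where(d == 0.0, 1e-12, d)  # exact hits get (near-)infinite weight
    w = 1.0 / d**p
    return (w * known_vals).sum(axis=1) / w.sum(axis=1)

def reduction_sti(known_xy, series, query_xy):
    """series: (n_stations, n_times); interpolate each time slice spatially."""
    return np.stack([idw(known_xy, series[:, t], query_xy)
                     for t in range(series.shape[1])], axis=1)
```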

  7. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non-Local Weights) replaces the heuristic parametric constants in the GGF-BNLM method with values derived dynamically from the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, a significant improvement in performance is achieved. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  8. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use in optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk

    2014-04-14

We propose a multiplexed absorption tomography technique that uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with the commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and show that the proposed technique outperforms currently available tomography techniques based on DAS.

  10. A robust star identification algorithm with star shortlisting

    NASA Astrophysics Data System (ADS)

    Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon

    2018-05-01

A star tracker provides the most accurate attitude solution, in terms of arc seconds, compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as the star identification algorithm, forms the most crucial part of a star tracker in LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. First, the star IDs are shortlisted based on worst-case patch mismatch, and stars in the image are then identified by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying field-of-view sizes. The proposed idea is also benchmarked against state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on 3104 real star images captured by an SST-20S star tracker currently mounted on a satellite. The proposed technique achieves an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results show that the proposed technique is highly robust and achieves a high identification speed suitable for actual space applications.

  11. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved many optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolutional neural networks (CNNs), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristics, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on classification of the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent). PMID:27375738

  12. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved many optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolutional neural networks (CNNs), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristics, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on classification of the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).

  13. Procedures for Comparing Instructional Programs.

    ERIC Educational Resources Information Center

    Klein, Stephen

    This paper examines comparative educational program evaluation. Suggested evaluative criteria and evaluation techniques and their weaknesses are discussed. An evaluation formula is proposed, and an example of its operation is provided. (DG)

  14. Vision-based system identification technique for building structures using a motion capture system

    NASA Astrophysics Data System (ADS)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gauge-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements from the MCS to accelerations and conducting SI by frequency domain decomposition (FDD). A free-vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.

  15. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E(R^m) is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psds. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E(R^m) for bimodal psds having the frequency of one mode at least 2.5 times that of the other mode.
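
    The following Python sketch illustrates the idea under stated assumptions: an AR(1) sequence stands in for the synthesized extrema, and the stress-range moment E(R^m) is estimated from the ranges between successive extrema. The thesis fits the autoregressive coefficients to the target psd rather than fixing them as here.

```python
# Sketch: synthesize a correlated extrema sequence with an AR(1) model and
# estimate the stress-range moment E(R^m) from successive extrema ranges.
import numpy as np

def ar1_extrema(n, phi=0.7, sigma=1.0, rng=None):
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(scale=sigma)
    return x

def range_moment(extrema, m=3.0):
    ranges = np.abs(np.diff(extrema))  # ranges between successive extrema
    return np.mean(ranges**m)          # estimate of E(R^m)

est = range_moment(ar1_extrema(100_000, phi=0.7), m=3.0)
```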

  16. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Most gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed by a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, is proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biological sufficiency criteria, i.e., Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g., subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that, under the multiple-criteria decision-making setup, the proposed technique is the best choice for informative gene selection among the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide for selecting statistical techniques for informative gene selection from high-dimensional expression data in breeding and systems biology studies. Published by Elsevier B.V.
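
    A hedged Python sketch of a bootstrap max-relevance/min-redundancy selector in the spirit of Boot-MRMR follows; the actual BootMRMR package is in R and uses its own composite score, so the mutual-information relevance, correlation-based redundancy, and vote aggregation here are stand-ins.

```python
# Sketch: greedy MRMR-style selection repeated over bootstrap resamples;
# genes selected most often across resamples win.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, k=20):
    relevance = mutual_info_classif(X, y)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy   # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

def boot_mrmr(X, y, k=20, n_boot=30, rng=None):
    rng = np.random.default_rng(rng)
    votes = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
        for j in mrmr_select(X[idx], y[idx], k):
            votes[j] += 1
    return np.argsort(votes)[::-1][:k]   # most frequently selected genes
```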

  17. A Novel Technique to Detect Code for SAC-OCDMA System

    NASA Astrophysics Data System (ADS)

    Bharti, Manisha; Kumar, Manoj; Sharma, Ajay K.

    2018-04-01

The main task of an optical code division multiple access (OCDMA) system is the detection of the code used by a user in the presence of multiple access interference (MAI). In this paper, a new detection method known as XOR subtraction detection for spectral amplitude coding OCDMA (SAC-OCDMA) based on double weight codes is proposed and presented. As MAI is the main source of performance deterioration in OCDMA systems, the SAC technique is used here to eliminate the effect of MAI to a large extent. A comparative analysis is then made between the proposed scheme and other conventional detection schemes, namely complementary subtraction detection, AND subtraction detection and NAND subtraction detection. The system performance is characterized by Q-factor, BER and received optical power (ROP) with respect to input laser power and fiber length. The theoretical and simulation investigations reveal that the proposed detection technique provides a better quality factor, security and received power than the other conventional techniques. The wide eye opening in the case of the proposed technique also demonstrates its robustness.

  18. A Bio Medical Waste Identification and Classification Algorithm Using Mltrp and Rvm.

    PubMed

    Achuthan, Aravindan; Ayyallu Madangopal, Vasumathi

    2016-10-01

We aimed to extract histogram features for texture analysis and to classify types of Bio Medical Waste (BMW) for garbage disposal and management. The given BMW image was preprocessed using a median filtering technique that efficiently reduced the noise in the image. After that, the histogram features of the filtered image were extracted with the help of the proposed Modified Local Tetra Pattern (MLTrP) technique. Finally, a Relevance Vector Machine (RVM) was used to classify the BMW into human body parts, plastics, cotton and liquids. The BMW images were collected from a garbage image dataset for analysis. The performance of the proposed BMW identification and classification system was evaluated in terms of sensitivity, specificity, classification rate and accuracy with the help of MATLAB. Compared to existing techniques, the proposed techniques provided better results. This work proposes a new texture analysis and classification technique for BMW management and disposal. It can be used in many real-time applications, such as hospital and healthcare management systems, for proper BMW disposal.
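
    For illustration only, the pipeline below pairs median filtering with a plain local-binary-pattern histogram as a stand-in for the MLTrP features (which encode richer four-direction "tetra" patterns), and substitutes scikit-learn's SVC for the RVM classifier; none of these substitutions is the authors' exact method.

```python
# Sketch: median filter -> local binary pattern histogram -> classifier.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import SVC

def pattern_histogram(image):
    img = median_filter(image, size=3).astype(float)   # noise suppression
    c = img[1:-1, 1:-1]
    # eight neighbours -> one 8-bit pattern code per pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256), density=True)
    return hist

# features = [pattern_histogram(img) for img in waste_images]
# clf = SVC().fit(features, labels)  # body parts / plastics / cotton / liquids
```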

  19. System identification through nonstationary data using Time-Frequency Blind Source Separation

    NASA Astrophysics Data System (ADS)

    Guo, Yanlin; Kareem, Ahsan

    2016-06-01

Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g., typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency content and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes energy, the proposed technique can successfully identify mode shapes and recover modal responses from non-stationary response data where traditional SI methods often encounter difficulties. This technique can also handle responses with closely spaced modes, a well-known challenge in the identification of large-scale structures. Based on the separated modal responses, frequency and damping can be easily identified using SI methods based on a single-degree-of-freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated in several simulation-based studies and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that some non-stationary response data can be analyzed by the stationary method SOBI. This paper delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous. Finally, the proposed method is evaluated using the full-scale non-stationary response of a tall building during an earthquake and is found to perform satisfactorily.

  20. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing paradigm tries to utilize the available resources to provide services. Hence, maintaining resource utilization without wasted power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with the existing ant colony approach to the virtual machine placement problem, and it is shown to provide better results than the existing technique.

  1. Local dynamic range compensation for scanning electron microscope imaging system.

    PubMed

    Sim, K S; Huang, Y H

    2015-01-01

This paper presents an extension of earlier work, introducing the modified dynamic range histogram modification (MDRHM) technique to enhance scanning electron microscope (SEM) imaging systems. In contrast to conventional histogram modification compensators, this technique profiles the histogram by extending the dynamic range of each tile of an image to the full 0-255 range while retaining its histogram shape. The proposed technique yields better image compensation compared to conventional methods. © Wiley Periodicals, Inc.
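
    A minimal sketch of the tile-wise idea, under the assumption that each tile is stretched linearly to the full 0-255 range (a linear stretch preserves the shape of the tile's histogram, unlike equalization, which reshapes it); the tile size is arbitrary.

```python
# Sketch: per-tile linear dynamic range stretch to 0-255.
import numpy as np

def tile_stretch(image, tile=64):
    out = image.astype(float).copy()
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            block = out[r:r + tile, c:c + tile]
            lo, hi = block.min(), block.max()
            if hi > lo:   # skip flat tiles to avoid dividing by zero
                out[r:r + tile, c:c + tile] = (block - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)
```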

  2. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

We propose an efficient partial transmit sequence (PTS) technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with its excellent local search ability, to further process signals whose PAPR is still over the threshold after processing by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the PTS technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS and SFLAHC-PTS techniques.
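
    The PTS mechanics can be sketched as follows: split the frequency-domain block into sub-blocks, rotate each by a candidate phase, and keep the candidate set with the lowest PAPR. In the sketch below, random candidate search stands in for the paper's GA + POA (GAPOA) search; the sub-block count and phase alphabet are illustrative.

```python
# Sketch of partial transmit sequences: phase-rotate frequency-domain
# sub-blocks and keep the combination with the lowest PAPR.
import numpy as np

def papr_db(x):
    p = np.abs(x)**2
    return 10 * np.log10(p.max() / p.mean())

def pts_reduce(symbols, n_sub=4, n_cand=64, rng=None):
    """symbols: one OFDM block of frequency-domain (e.g., QAM) symbols."""
    rng = np.random.default_rng(rng)
    sub = np.array_split(symbols, n_sub)   # disjoint sub-blocks in frequency
    best_x, best_papr = None, np.inf
    for _ in range(n_cand):
        phases = np.exp(1j * rng.choice([0, np.pi/2, np.pi, 3*np.pi/2], n_sub))
        spectrum = np.concatenate([p * s for p, s in zip(phases, sub)])
        x = np.fft.ifft(spectrum)          # candidate time-domain signal
        if (val := papr_db(x)) < best_papr:
            best_x, best_papr = x, val
    return best_x, best_papr
```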

  3. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    PubMed Central

    2018-01-01

Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure a continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, and the use of artificial intelligence methods together with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm is proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnoses, unoptimised SVM, and previously reported works. Data reduction was also applied using stepwise regression prior to the SVM training process to reduce the training time. The proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions for identifying the transformer fault type from DGA data on site. PMID:29370230

  4. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    PubMed

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure a continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, and the use of artificial intelligence methods together with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm is proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnoses, unoptimised SVM, and previously reported works. Data reduction was also applied using stepwise regression prior to the SVM training process to reduce the training time. The proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions for identifying the transformer fault type from DGA data on site.

  5. A 2D spiral turbo-spin-echo technique.

    PubMed

    Li, Zhiqiang; Karis, John P; Pipe, James G

    2018-03-09

2D turbo-spin-echo (TSE) is widely used in the clinic for neuroimaging. However, the long refocusing radiofrequency pulse train leads to a high specific absorption rate (SAR) and alters the contrast compared to conventional spin-echo. The purpose of this work is to develop a robust 2D spiral TSE technique for fast T2-weighted imaging with low SAR and improved contrast. A spiral-in/out readout is incorporated into 2D TSE to take full advantage of the acquisition efficiency of spiral sampling while avoiding potential off-resonance-related artifacts compared to a typical spiral-out readout. A double encoding strategy and a signal demodulation method are proposed to mitigate the artifacts due to T2-decay-induced signal variation. An adapted prescan phase correction as well as a concomitant phase compensation technique are implemented to minimize the phase errors. Phantom data demonstrate the efficacy of the proposed double encoding/signal demodulation, as well as the prescan phase correction and concomitant phase compensation. Volunteer data show that the proposed 2D spiral TSE achieves fast scan speed with high SNR, low SAR, and improved contrast compared to conventional Cartesian TSE. A robust 2D spiral TSE technique is feasible and provides a potential alternative to conventional 2D Cartesian TSE for T2-weighted neuroimaging. © 2018 International Society for Magnetic Resonance in Medicine.

  6. Robust volcano plot: identification of differential metabolites in the presence of outliers.

    PubMed

    Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro

    2018-04-11

    The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
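
    To illustrate the outlier-robust idea, the sketch below builds volcano-plot coordinates from medians and MAD-based scales rather than means and standard deviations; the authors' method uses kernel weights, so this is a simpler robust stand-in, and it assumes positive metabolite intensities.

```python
# Sketch: robust volcano-plot coordinates (log2 fold change, -log10 p)
# from medians and MAD instead of means and SD.
import numpy as np
from scipy import stats

def robust_volcano(group_a, group_b):
    """group_a, group_b: arrays of shape (n_samples, n_metabolites)."""
    med_a = np.median(group_a, axis=0)
    med_b = np.median(group_b, axis=0)
    log_fc = np.log2(med_b / med_a)   # assumes positive intensities
    mad_a = stats.median_abs_deviation(group_a, axis=0, scale="normal")
    mad_b = stats.median_abs_deviation(group_b, axis=0, scale="normal")
    se = np.sqrt(mad_a**2 / len(group_a) + mad_b**2 / len(group_b))
    z = (med_b - med_a) / se
    pvals = 2 * stats.norm.sf(np.abs(z))   # two-sided p-values
    return log_fc, -np.log10(pvals)
```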

  7. Multiscale corner detection and classification using local properties and semantic patterns

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Giuoco, Alessandro L.

    2002-05-01

A new technique to detect, localize and classify corners in digital closed curves is proposed. The technique is based on correct estimation of the region of support for each point. We compute multiscale curvature to detect and localize corners. As a further step, with the aid of some local features, corners can be classified into seven distinct types. Classification is performed using a set of rules which describe corners according to preset semantic patterns. Compared with existing techniques, the proposed approach belongs to the family of algorithms that try to explain the curve instead of simply labeling it. Moreover, our technique works in a manner similar to what are believed to be the typical mechanisms of human perception.

  8. Proposal for a new trajectory for subaxial cervical lateral mass screws.

    PubMed

    Amhaz-Escanlar, Samer; Jorge-Mora, Alberto; Jorge-Mora, Teresa; Febrero-Bande, Manuel; Diez-Ulloa, Maximo-Alberto

    2018-06-20

Lateral mass screws combined with rods are the standard method for posterior fixation of the subaxial cervical spine. Several techniques have been described, among which the most used are those of Roy-Camille, Magerl, Anderson and An. All of them are based on three-dimensional angles. The reliability of freehand angle estimation remains poorly investigated. We propose a new technique based on on-site spatial references and compare it with previously described ones, assessing screw length and potential neurovascular complications. Four different lateral mass screw insertion techniques (Magerl, Anderson, An and the newly described technique) were performed bilaterally, from C3 to C6, in ten human spine specimens. A drill-tip guide wire was inserted as originally described for each trajectory, and screw length was measured. The exit point was examined, and potential vertebral artery or nerve root injury was assessed. Mean screw length was 14.05 mm using Magerl's technique, 13.47 mm using Anderson's, 12.8 mm using An's and 17.03 mm using the new technique. Data analysis showed significantly longer lateral mass screw lengths using the new technique (p < 0.00001). Potential nerve injury occurred 37 times using Magerl's technique, 28 using Anderson's, 13 using An's and twice using the new technique. Potential vertebral artery injury occurred once using Magerl's technique, 8 times using Anderson's and never using either An's or the newly proposed technique. The risk of neurovascular complication was significantly lower using the new technique (p < 0.01). The newly proposed technique allows for longer screws, maximizing purchase and stability, while lowering the complication rate.

  9. Nature of the optical information recorded in speckles

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.

    1998-09-01

The process of encoding displacement information in electronic holographic interferometry is reviewed. Procedures to extend the applicability of this technique to large deformations are given. The proposed techniques are applied, and the results from these experiments are compared with results obtained by other means. The similarity between the two sets of results demonstrates the validity of the new techniques.

  10. Porosity and hydraulic conductivity estimation of the basaltic aquifer in Southern Syria by using nuclear and electrical well logging techniques

    NASA Astrophysics Data System (ADS)

    Asfahani, Jamal

    2017-08-01

An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by applying this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed alternative well logging methodology seems promising and could be practiced in basaltic environments for the estimation of hydraulic conductivity. However, more detailed research is still required to make this proposed technique fully reliable in basaltic environments.

  11. A preclustering-based ensemble learning technique for acute appendicitis diagnoses.

    PubMed

    Lee, Yen-Hsien; Hu, Paul Jen-Hwa; Cheng, Tsang-Hsiang; Huang, Te-Chia; Chuang, Wei-Yao

    2013-06-01

    Acute appendicitis is a common medical condition, whose effective, timely diagnosis can be difficult. A missed diagnosis not only puts the patient in danger but also requires additional resources for corrective treatments. An acute appendicitis diagnosis constitutes a classification problem, for which a further fundamental challenge pertains to the skewed outcome class distribution of instances in the training sample. A preclustering-based ensemble learning (PEL) technique aims to address the associated imbalanced sample learning problems and thereby support the timely, accurate diagnosis of acute appendicitis. The proposed PEL technique employs undersampling to reduce the number of majority-class instances in a training sample, uses preclustering to group similar majority-class instances into multiple groups, and selects from each group representative instances to create more balanced samples. The PEL technique thereby reduces potential information loss from random undersampling. It also takes advantage of ensemble learning to improve performance. We empirically evaluate this proposed technique with 574 clinical cases obtained from a comprehensive tertiary hospital in southern Taiwan, using several prevalent techniques and a salient scoring system as benchmarks. The comparative results show that PEL is more effective and less biased than any benchmarks. The proposed PEL technique seems more sensitive to identifying positive acute appendicitis than the commonly used Alvarado scoring system and exhibits higher specificity in identifying negative acute appendicitis. In addition, the sensitivity and specificity values of PEL appear higher than those of the investigated benchmarks that follow the resampling approach. Our analysis suggests PEL benefits from the more representative majority-class instances in the training sample. According to our overall evaluation results, PEL records the best overall performance, and its area under the curve measure reaches 0.619. The PEL technique is capable of addressing imbalanced sample learning associated with acute appendicitis diagnosis. Our evaluation results suggest PEL is less biased toward a positive or negative class than the investigated benchmark techniques. In addition, our results indicate the overall effectiveness of the proposed technique, compared with prevalent scoring systems or salient classification techniques that follow the resampling approach. Copyright © 2013 Elsevier B.V. All rights reserved.
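
    A minimal sketch of the PEL idea under stated assumptions: cluster the majority (negative) class, draw representatives from each cluster to form balanced training samples, and let an ensemble of base learners vote. The cluster count, base learner, and voting rule below are illustrative choices, not the study's exact configuration.

```python
# Sketch: preclustering-based undersampling + ensemble voting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def pel_fit(X_maj, X_min, n_members=10, n_clusters=8, rng=None):
    rng = np.random.default_rng(rng)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_maj)
    members = []
    for _ in range(n_members):
        picks = []
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            take = min(max(1, len(X_min) // n_clusters), len(idx))
            # representatives from each cluster, not a blind random subset
            picks.extend(rng.choice(idx, size=take, replace=False))
        Xs = np.vstack([X_maj[picks], X_min])
        ys = np.r_[np.zeros(len(picks)), np.ones(len(X_min))]  # 1 = positive
        members.append(DecisionTreeClassifier().fit(Xs, ys))
    return members

def pel_predict(members, X):
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return (votes >= 0.5).astype(int)
```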

  12. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  13. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    PubMed

    Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.

  14. Low-cost capacitor voltage inverter for outstanding performance in piezoelectric energy harvesting.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Richard, Claude; Guyomar, Daniel

    2010-01-01

The purpose of this paper is to propose a new scheme for piezoelectric energy harvesting optimization. The proposed enhancement relies on a new topology for inverting the voltage across a single capacitor with reduced losses. The increased inversion quality allows a much more effective energy harvesting process using the so-called synchronized switch harvesting on inductor (SSHI) nonlinear technique. It is shown that the proposed architecture, based on a 2-step inversion, increases the harvested power by a theoretical factor of up to √2 (i.e., a 40% gain) compared with classical SSHI, allowing an increase in harvested power of more than 1000% compared with the standard energy harvesting technique for realistic values of the inversion components. The proposed circuit, using only 4 digital switches and an intermediate capacitor, is also ultra-low power, because the inversion circuit does not require any external energy and the command signals are very simple.

  15. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
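
    The SRC step with a naive unsupervised dictionary update can be sketched as below, assuming orthogonal matching pursuit for the sparse code and L2-normalized dictionary atoms; the paper's supervised update and incoherence-based dictionary modification are richer than this stand-in.

```python
# Sketch: sparse representation classification by class-wise residual,
# plus a naive unsupervised dictionary update (append the classified trial).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, classes, x, n_nonzero=10):
    """D: (n_features, n_atoms) dictionary; classes: label of each atom."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(D, x)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(classes):
        mask = classes == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)   # smallest class residual wins

def unsupervised_update(D, classes, x, label):
    atom = (x / np.linalg.norm(x)).reshape(-1, 1)
    return np.hstack([D, atom]), np.append(classes, label)
```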

  16. New optical frequency domain differential mode delay measurement method for a multimode optical fiber.

    PubMed

    Ahn, T; Moon, S; Youk, Y; Jung, Y; Oh, K; Kim, D

    2005-05-30

A novel mode analysis method and differential mode delay (DMD) measurement technique for a multimode optical fiber, based on optical frequency domain reflectometry (OFDR), is proposed for the first time. We used a conventional OFDR with a tunable external cavity laser and a Michelson interferometer. A few-mode multimode optical fiber was prepared to test the proposed measurement technique. We also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method.

  17. Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs

    PubMed Central

    Abdullah, Bassem A; Younis, Akmal A; John, Nigel M

    2012-01-01

In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in regions of MS lesions and blocks in non-MS lesion regions, mainly based on textural features with the aid of other features. The classification is done independently on each of the axial, sagittal and coronal sectional brain views, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets. The results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI. PMID:22741026

  18. Measurement of total ultrasonic power using thermal expansion and change in buoyancy of an absorbing target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, P. K., E-mail: premkdubey@gmail.com; Kumar, Yudhisther; Gupta, Reeta

    2014-05-15

The Radiation Force Balance (RFB) technique is well established and the most widely used method for measuring the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for the calibration of ultrasonic transducers, with relatively fair uncertainty in the low-power (below 1 W) regime. With this technique, uncertainty increases in the range of a few watts, where effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber under the relatively high radiated force that occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating. The linear thermal expansion of the target changes its apparent mass in water due to the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.

  19. Measurement of total ultrasonic power using thermal expansion and change in buoyancy of an absorbing target

    NASA Astrophysics Data System (ADS)

    Dubey, P. K.; Kumar, Yudhisther; Gupta, Reeta; Jain, Anshul; Gohiya, Chandrashekhar

    2014-05-01

The Radiation Force Balance (RFB) technique is well established and the most widely used method for measuring the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for the calibration of ultrasonic transducers, with relatively fair uncertainty in the low-power (below 1 W) regime. With this technique, uncertainty increases in the range of a few watts, where effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber under the relatively high radiated force that occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating. The linear thermal expansion of the target changes its apparent mass in water due to the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.

  20. Flow Injection Technique for Biochemical Analysis with Chemiluminescence Detection in Acidic Media

    PubMed Central

    Chen, Jing; Fang, Yanjun

    2007-01-01

    A review with 90 references is presented to show the development of acidic chemiluminescence methods for biochemical analysis using the flow injection technique over the last 10 years. A brief discussion of both chemiluminescence and the flow injection technique is given. The proposed methods for biochemical analysis are described and compared according to the chemiluminescence system used.

  1. A new cooperative MIMO scheme based on SM for energy-efficiency improvement in wireless sensor network.

    PubMed

    Peng, Yuyang; Choi, Jaeho

    2014-01-01

    Improving the energy efficiency of wireless sensor networks (WSN) has attracted considerable attention. The multiple-input multiple-output (MIMO) technique has been proven to be a good candidate for improving energy efficiency, but it may not be feasible in a WSN due to the size limitation of the sensor node. As a solution, the cooperative multiple-input multiple-output (CMIMO) technique overcomes this constraint and shows dramatically better performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM. Based on this model, the transmission approach is introduced graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of energy consumption per bit, compared with conventional CMIMO, is presented. Guided by this new scheme, we then extend CMIMO-SM to a multihop clustered WSN to achieve further energy savings by finding an optimal hop length, with the traditional equidistant-hop scheme used as the baseline for comparison. Results from the simulations and numerical experiments indicate that the proposed scheme achieves significant savings in total energy consumption. Combining the proposed scheme with monitoring sensor nodes would provide good performance in arbitrarily deployed WSNs such as forest fire detection systems.

  2. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature for improving the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher-dimensional spaces. PMID:26543580
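
    The kD-tree proposal idea lends itself to a compact sketch. The following Python fragment is an illustrative simplification, not the authors' code: it builds a proposal for jumps into a model from that model's stored single-model MCMC samples, drawing uniformly from the bounding box of the k nearest neighbours of a randomly chosen sample.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def make_kdtree_proposal(samples, k=8, rng=None):
        """Build an inter-model jump proposal from stored single-model MCMC
        samples (n_samples x n_dims), approximating that model's posterior.
        Simplified sketch: pick a stored sample, then draw uniformly from the
        bounding box of its k nearest neighbours.  For a valid RJMCMC
        acceptance ratio, the density of this box-uniform draw must also
        enter the Metropolis-Hastings ratio."""
        rng = rng or np.random.default_rng()
        tree = cKDTree(samples)

        def propose():
            centre = samples[rng.integers(len(samples))]
            _, idx = tree.query(centre, k=k)
            box = samples[idx]
            lo, hi = box.min(axis=0), box.max(axis=0)
            return rng.uniform(lo, hi)   # proposed jump target

        return propose

    # usage: proposals for jumps *into* model A come from model A's own chain
    propose_into_A = make_kdtree_proposal(np.random.randn(5000, 3))
    theta_new = propose_into_A()
    ```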

  3. Double Density Dual Tree Discrete Wavelet Transform implementation for Degraded Image Enhancement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    The wavelet transform is a central tool in modern image processing. A Double Density Dual Tree Discrete Wavelet Transform is used and investigated here for image denoising. Test images are considered for the analysis, and the performance is compared with the discrete wavelet transform (DWT) and the Double Density DWT. Peak Signal-to-Noise Ratio and Root Mean Square Error values are calculated for the denoised images under all three wavelet techniques, and the performance is evaluated. The proposed technique gives better performance than the other two wavelet techniques.
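
    For reference, the two quality metrics used above reduce to a few lines each; a minimal numpy sketch, assuming 8-bit images (peak value 255):

    ```python
    import numpy as np

    def rmse(reference, denoised):
        """Root mean square error between two images of equal shape."""
        err = reference.astype(np.float64) - denoised.astype(np.float64)
        return np.sqrt(np.mean(err ** 2))

    def psnr(reference, denoised, peak=255.0):
        """Peak signal-to-noise ratio in dB (peak=255 for 8-bit images)."""
        r = rmse(reference, denoised)
        return np.inf if r == 0 else 20.0 * np.log10(peak / r)
    ```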

  4. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling

    PubMed Central

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    The artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we present the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm that combines minimum redundancy maximum relevance (mRMR) with the ABC algorithm, named mRMR-ABC, to select informative genes from the microarray profile. The approach uses a support vector machine (SVM) algorithm to measure the classification accuracy of the selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with particle swarm optimization (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on all tested datasets, outperforming the previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
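
    The SVM-based fitness evaluation at the heart of such wrapper searches is straightforward to sketch. The fragment below is illustrative only (the dataset and the 20-gene candidate are hypothetical): it scores a candidate gene subset by cross-validated SVM accuracy, which is what an ABC food source would be evaluated on.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def subset_accuracy(X, y, gene_idx, folds=5):
        """Fitness of a candidate gene subset: cross-validated SVM accuracy
        using only those genes.  X is (samples x genes); gene_idx is a list
        of column indices proposed by the search (e.g. an ABC food source)."""
        clf = SVC(kernel="linear", C=1.0)
        scores = cross_val_score(clf, X[:, gene_idx], y, cv=folds)
        return scores.mean()

    # hypothetical example: score a random 20-gene candidate
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2000))
    y = rng.integers(0, 2, size=60)
    print(subset_accuracy(X, y, rng.choice(2000, size=20, replace=False)))
    ```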

  5. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    PubMed

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    The artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we present the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm that combines minimum redundancy maximum relevance (mRMR) with the ABC algorithm, named mRMR-ABC, to select informative genes from the microarray profile. The approach uses a support vector machine (SVM) algorithm to measure the classification accuracy of the selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with particle swarm optimization (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on all tested datasets, outperforming the previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.

  6. Hologram repositioning by an interferometric technique.

    PubMed

    Soares, O D

    1979-11-15

    An interferometric technique for hologram repositioning is described, in which the hologram is compared with the interference pattern of the reference and object waves. Analytical expressions for evaluating the accuracy of the repositioning are presented. Two applications of the method in metrology, for micromovement measurements, are proposed.

  7. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex than singular value decomposition. A fuzzy inference engine assigns weights to the different overlapping subspaces. Quantitative measures and visual inspection are used to compare the existing and proposed techniques.

  8. Noise estimation for hyperspectral imagery using spectral unmixing and synthesis

    NASA Astrophysics Data System (ADS)

    Demirkesen, C.; Leloglu, Ugur M.

    2014-10-01

    Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise ratio model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise; a very comprehensive comparative study on the subject is given by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation over assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed. The de-noising algorithm is based on estimating the end-members and their fractional abundances using the non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique NFINDR after manual selection of the number of end-members, and the image is reconstructed from the estimated end-members and abundances. Image de-noising and noise estimation are two sides of the same coin: once an image is de-noised, the noise can be estimated as the difference between the de-noised image and the original noisy image. In this study, the noise is estimated in exactly this way. To assess the accuracy of the method, the methodology in [1] is followed: synthetic images are created by mixing end-member spectra and noise. Since the best performing method for noise estimation was the spectral and spatial de-correlation (SSDC) method originally proposed in [3], the proposed method is compared to SSDC. The results of experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, the proposed technique is shown to be robust to changes in its single parameter, the number of end-members.
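
    The de-noise-then-subtract idea can be sketched compactly. Assuming the end-members have already been extracted (e.g., by NFINDR), the fragment below reconstructs each pixel by non-negative least squares and takes the residual as the noise estimate; this is a simplified illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def estimate_noise(cube, endmembers):
        """cube: (rows, cols, bands) noisy HSI; endmembers: (n_em, bands).
        Returns the estimated noise = original - NNLS reconstruction."""
        rows, cols, bands = cube.shape
        E = endmembers.T                      # (bands, n_em) mixing matrix
        noise = np.empty(cube.shape, dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                pixel = cube[r, c].astype(np.float64)
                abund, _ = nnls(E, pixel)     # fractional abundances >= 0
                noise[r, c] = pixel - E @ abund
        return noise
    ```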

  9. Application of source biasing technique for energy efficient DECODER circuit design: memory array application

    NASA Astrophysics Data System (ADS)

    Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav

    2018-04-01

    Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of a memory system, however, the input circuitry of the memory architecture, i.e. the row and column decoders, must also be addressed. In this work, a low-leakage, high-speed row and column decoder for memory array applications is designed, and four new techniques are proposed and analyzed: the cluster, body-bias, source-bias, and source-coupling DECODERs. Simulations for the comparative analysis of the different DECODER design parameters are performed at the 180 nm GPDK technology node using the CADENCE tool. Simulation results show that the proposed source-bias DECODER circuit technique decreases leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03%, and delay by 57.25% at a 1.2 V supply voltage.

  10. Noble-TLBO MPPT Technique and its Comparative Analysis with Conventional methods implemented on Solar Photo Voltaic System

    NASA Astrophysics Data System (ADS)

    Patsariya, Ajay; Rai, Shiwani; Kumar, Yogendra, Dr.; Kirar, Mukesh, Dr.

    2017-08-01

    The energy crisis, particularly in economies with growing GDPs, has opened a new panorama for sustainable power sources such as solar energy, which has experienced huge growth. Progressively higher penetration levels of photovoltaic (PV) generation are emerging in the smart grid. Solar power is intermittent and variable, as the solar resource at ground level is highly dependent on cloud-cover variability, atmospheric aerosol levels, and other weather parameters. The inherent variability of large-scale solar generation introduces significant challenges for smart-grid energy management, and accurate forecasting of solar power/irradiance is critical to the economic operation of the smart grid. In this paper a novel TLBO-based MPPT technique is proposed to harvest solar energy effectively. A comparative analysis is presented between the conventional P&O and IC methods and the proposed MPPT technique. The work was carried out in MATLAB/Simulink (version 2013).
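
    As a point of reference for the conventional baselines mentioned above, a minimal sketch of the classic perturb-and-observe (P&O) tracker follows; the TLBO variant itself is not reproduced here, and the names and step size are illustrative.

    ```python
    def perturb_and_observe(v, p, state, step=0.5):
        """One perturb-and-observe (P&O) iteration.  v, p are the latest PV
        voltage and power samples; state carries the previous power and the
        current perturbation direction.  Returns the next voltage reference."""
        if p < state["p_prev"]:      # power dropped: reverse the perturbation
            state["dir"] = -state["dir"]
        state["p_prev"] = p
        return v + state["dir"] * step

    state = {"p_prev": 0.0, "dir": +1}
    # each control cycle: v_ref = perturb_and_observe(v_meas, v_meas * i_meas, state)
    ```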

  11. Quantum memory with a controlled homogeneous splitting

    NASA Astrophysics Data System (ADS)

    Hétet, G.; Wilkowski, D.; Chanelière, T.

    2013-04-01

    We propose a quantum memory protocol in which an input light field can be stored in and released from a single ground-state atomic ensemble by dynamically controlling the strength of an external static and homogeneous field. The technique relies on the adiabatic following of a polaritonic excitation into a state for which forward collective radiative emission is forbidden. The resemblance to archetypal electromagnetically induced transparency is only formal, because no ground-state-coherence-based slow-light propagation is involved here. Compared to the other grand category of protocols derived from the photon-echo technique, our approach involves only a homogeneous static field. We discuss two physical situations in which the effect can be observed, and show that in the limit where the excited-state lifetime is longer than the storage time, the protocols are perfectly efficient and noise-free. We compare the technique with other quantum memories, and propose atomic systems in which the experiment can be realized.

  12. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility

    PubMed Central

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-01-01

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands extra accuracy and computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), along with courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of the nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to pre-existing techniques in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373

  13. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility.

    PubMed

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-03-19

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands extra accuracy and computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), along with courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of the nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to pre-existing techniques in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability.

  14. Epileptic seizure classification of EEG time-series using rational discrete short-time fourier transform.

    PubMed

    Samiee, Kaveh; Kovács, Petér; Gabbouj, Moncef

    2015-02-01

    A system for epileptic seizure detection in electroencephalography (EEG) is described in this paper. One of the challenges is to distinguish rhythmic discharges from the nonstationary patterns occurring during seizures. The proposed approach is based on an adaptive and localized time-frequency representation of EEG signals by means of rational functions. The corresponding rational discrete short-time Fourier transform (DSTFT) is a novel feature extraction technique for epileptic EEG data. A multilayer perceptron classifier is fed with the coefficients of the rational DSTFT in order to separate seizure epochs from seizure-free epochs. The effectiveness of the proposed method is compared with several state-of-the-art feature extraction algorithms used in offline epileptic seizure detection. The results of the comparative evaluations show that the proposed method outperforms competing techniques in terms of classification accuracy, while also providing a compact representation of EEG time-series.

  15. An Improved Map-Matching Technique Based on the Fréchet Distance Approach for Pedestrian Navigation Services

    PubMed Central

    Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun

    2016-01-01

    Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services, but these are inappropriate for PNS because pedestrians move, stop, and turn differently from vehicles, and the base map data for pedestrians are more complicated than those for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians on road network datasets. The approach is based on the Fréchet distance, a measure of geometric similarity between two curves; the Fréchet distance can provide reasonable matching results because the two linear trajectories are parameterized with the time variable. We further improved the method to adapt to the positional error of the GPS signal, using an adaptation coefficient to adjust the search range for every input signal based on the assumed auto-correlation between consecutive GPS points. To reduce matching errors, a reliability index is evaluated in real time for each match. To test the proposed map-matching method, we applied it to pedestrian GPS trajectories and road network data, and assessed the performance by comparing the results with reference datasets. Our proposed method performed better on the test data than a conventional map-matching technique for vehicles. PMID:27782091
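
    The geometric core of this matcher, the Fréchet distance, is usually computed in its discrete form on polylines. A standard dynamic-programming sketch (the Eiter-Mannila recurrence; not the authors' exact implementation, which adds the adaptive search range and reliability index):

    ```python
    import numpy as np

    def discrete_frechet(P, Q):
        """Discrete Frechet distance between polylines P and Q,
        each an (n, 2) array of points (Eiter-Mannila recurrence)."""
        n, m = len(P), len(Q)
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        ca = np.full((n, m), np.inf)
        ca[0, 0] = d[0, 0]
        for i in range(1, n):                   # first column
            ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
        for j in range(1, m):                   # first row
            ca[0, j] = max(ca[0, j - 1], d[0, j])
        for i in range(1, n):
            for j in range(1, m):
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1],
                                   ca[i, j - 1]), d[i, j])
        return ca[-1, -1]
    ```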

  16. A reliable ground bounce noise reduction technique for nanoscale CMOS circuits

    NASA Astrophysics Data System (ADS)

    Sharma, Vijay Kumar; Pattanaik, Manisha

    2015-11-01

    Power gating is the most effective method of reducing standby leakage power: header/footer high-VTH sleep transistors are added between the actual and virtual power/ground rails. When a power gating circuit transitions from sleep mode to active mode, a large instantaneous charge current flows through the sleep transistors. Ground bounce noise (GBN) is the large voltage fluctuation on the real ground rail during these sleep-to-active mode transitions, and it disturbs the logic states of internal circuit nodes. A novel and reliable power gating structure is proposed in this article to reduce the problem of GBN. The proposed structure contains low-VTH transistors in place of the high-VTH footer and not only reduces the GBN but also improves other performance metrics. A large mitigation of leakage power in both modes eliminates the need for high-VTH transistors. A comprehensive and comparative evaluation of the proposed technique is presented for a chain of 5 CMOS inverters. The simulation results are compared with other well-known GBN reduction circuit techniques using the 22 nm predictive technology model (PTM) bulk CMOS model in HSPICE. Robustness against process, voltage and temperature (PVT) variations is estimated through Monte Carlo simulations.

  17. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to obtain functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase in radiation dose and inaccuracy in material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues of the conventional technique and of EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved the quantitative accuracy as well as reduced radiation dose compared to the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.

  18. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep-learning-based ensemble regression estimator with asymptotic techniques, and offers a method to decrease uncertainty in oscillometric blood pressure (BP) measurements using the bootstrap and Monte Carlo approaches. The former is used to estimate SBP and DBP, while the latter determines confidence intervals (CIs) for SBP and DBP based on the oscillometric measurements. This work originally employs deep belief network-deep neural network (DBN-DNN) estimators to estimate BP from oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and valuable information may be lost when one DBN-DNN estimator is selected and the others discarded. Additionally, the input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness for the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (combining the bootstrap with the DBN-DNN technique) is used to generate the pseudo-features needed to estimate SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters; the AdaBoost approach is then employed for the second-stage SBP and DBP estimation. In the third stage, the bootstrap and Monte Carlo techniques are used to determine the CIs based on the target BP estimated by the DBN-DNN ensemble regression estimator with the asymptotic technique. The proposed method mitigates estimation uncertainty: compared with the single DBN-DNN regression estimator, the ensemble reduces the standard deviations of error (SDEs) of the SBP and DBP by 0.58 and 0.57 mmHg, respectively, corresponding to performance improvements of 9.18% and 10.88%. The proposed methodology thus improves the accuracy of BP estimation and reduces its uncertainty. Copyright © 2017 Elsevier B.V. All rights reserved.
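
    The two generic ingredients, bagging and a bootstrap confidence interval, can be sketched as follows. This is an illustration only, with scikit-learn's MLPRegressor standing in for the paper's DBN-DNN estimator and all sizes hypothetical:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def bootstrap_ensemble_bp(X, y, x_new, n_boot=30, seed=0):
        """Bagged regression estimate plus a 95% bootstrap CI.
        MLPRegressor stands in here for the paper's DBN-DNN estimator."""
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
            model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
            model.fit(X[idx], y[idx])
            preds.append(model.predict(x_new.reshape(1, -1))[0])
        preds = np.asarray(preds)
        return preds.mean(), np.percentile(preds, [2.5, 97.5])
    ```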

  19. Communicative Competence of the Fourth Year Students: Basis for Proposed English Language Program

    ERIC Educational Resources Information Center

    Tuan, Vu Van

    2017-01-01

    This study on the level of communicative competence, covering linguistic/grammatical and discourse competence, aimed at constructing a proposed English language program for 5 key universities in Vietnam. The descriptive method was employed with comparative techniques and correlational analysis. The researcher treated the surveyed data…

  20. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best-match location. Due to the nonmonotonic growth pattern of correlation-based similarity measures, partial computation elimination techniques have traditionally been considered inapplicable for speeding up these measures. In this paper, we show that partial elimination techniques may be applied to the correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value; we propose two initialization schemes, a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., they have exhaustive-equivalent accuracy, and are compared with existing fast techniques using real image data sets over a wide variety of template sizes. While the actual speedups are data dependent, in most cases our proposed algorithms have been found to be significantly faster than the other algorithms.
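
    The partial-elimination idea is easiest to see with a mismatch measure that grows monotonically as pixels are accumulated, such as the sum of absolute differences (SAD); the paper's contribution is a monotonic reformulation that lets the correlation coefficient be pruned the same way. A minimal SAD sketch:

    ```python
    import numpy as np

    def best_match_sad(image, template):
        """Exhaustive-equivalent template matching with partial computation
        elimination: the running SAD can only grow, so a candidate location
        is abandoned as soon as its partial sum exceeds the best total found
        so far.  (The paper applies the same pruning to a monotonic
        reformulation of the correlation coefficient.)"""
        image = image.astype(np.float64)
        template = template.astype(np.float64)
        th, tw = template.shape
        best, best_pos = np.inf, None
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                sad = 0.0
                for r in range(th):                   # row-wise partial sums
                    sad += np.abs(image[y + r, x:x + tw] - template[r]).sum()
                    if sad >= best:                   # early termination
                        break
                else:
                    best, best_pos = sad, (y, x)
        return best_pos, best
    ```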

  1. A combination of selected mapping and clipping to increase energy efficiency of OFDM systems

    PubMed Central

    Lee, Byung Moo; Rim, You Seung

    2017-01-01

    We propose an energy-efficient combination design for OFDM systems based on the selected mapping (SLM) and clipping peak-to-average power ratio (PAPR) reduction techniques, and present the related energy efficiency (EE) performance analysis. Combining two different PAPR reduction techniques can provide a significant benefit in increasing EE, because it takes advantage of both techniques. For the combination, we choose the clipping and SLM techniques, since the former is quite simple and effective, and the latter does not cause any signal distortion. We provide the structure and the systematic operating method, and present various analyses to derive the EE gain of the combined technique. Our analysis shows that the combined technique increases the EE by 69% compared to no PAPR reduction, and by 19.34% compared to using the SLM technique alone. PMID:29023591
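
    A minimal numpy sketch of the two stages, SLM candidate selection followed by amplitude clipping, is shown below; the candidate count and clipping ratio are illustrative, not the paper's parameters.

    ```python
    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def slm_then_clip(symbols, n_candidates=8, clip_ratio=1.5, rng=None):
        """SLM: rotate the frequency-domain symbols by random phase sequences
        and keep the IFFT candidate with the lowest PAPR; then clip
        amplitudes above clip_ratio * RMS (phase preserved)."""
        rng = rng or np.random.default_rng()
        n = len(symbols)
        best = None
        for _ in range(n_candidates):
            phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
            cand = np.fft.ifft(symbols * phases)
            if best is None or papr_db(cand) < papr_db(best):
                best = cand
        limit = clip_ratio * np.sqrt(np.mean(np.abs(best) ** 2))
        over = np.abs(best) > limit
        best[over] = limit * best[over] / np.abs(best[over])
        return best
    ```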

  2. Novel spectrophotometric methods for simultaneous determination of Amlodipine, Valsartan and Hydrochlorothiazide in their ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany

    2015-04-01

    This work presents a comparative study of two smart spectrophotometric techniques, namely successive resolution and progressive resolution, for the simultaneous determination of ternary mixtures of Amlodipine (AML), Hydrochlorothiazide (HCT) and Valsartan (VAL) without prior separation steps. These techniques consist of several consecutive steps utilizing zero-order and/or ratio and/or derivative spectra. By applying successive spectrum subtraction coupled with the constant multiplication method, the drugs were obtained in their zero-order absorption spectra and determined at their maxima of 237.6 nm, 270.5 nm and 250 nm for AML, HCT and VAL, respectively; by applying successive derivative subtraction, they were obtained in their first derivative spectra and determined at P230.8-246, P261.4-278.2 and P233.7-246.8 for AML, HCT and VAL, respectively. In the progressive resolution, the concentrations of the components were determined progressively, either from the same zero-order absorption spectrum using absorbance subtraction coupled with the absorptivity factor method, or from the same ratio spectrum via the amplitude modulation method, which allows the determination of ternary mixtures using only one divisor. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. Moreover, a comparative study between the spectrum addition technique, a novel enrichment technique, and the well-established spiking technique was carried out for the analysis of pharmaceutical formulations containing a low concentration of AML. The methods were validated as per ICH guidelines, where accuracy, precision and specificity were found to be within their acceptable limits. The results obtained with the proposed methods were statistically compared with those of a reported method, and no significant difference was observed.

  3. Combined Use of Terrestrial Laser Scanning and IR Thermography Applied to a Historical Building

    PubMed Central

    Costanzo, Antonio; Minasi, Mario; Casula, Giuseppe; Musacchio, Massimo; Buongiorno, Maria Fabrizia

    2015-01-01

    The conservation of architectural heritage usually requires a multidisciplinary approach involving a variety of specialist expertise and techniques. Nevertheless, destructive techniques should be avoided wherever possible in order to preserve the integrity of historical buildings; the development of non-destructive and non-contact techniques is therefore extremely important. In this framework, a methodology combining terrestrial laser scanning and infrared thermal imaging is proposed, in order to obtain a reconnaissance of the conservation state of a historical building. The case study is the St. Augustine Monumental Compound, located in the historical centre of the town of Cosenza (Calabria, South Italy). Adopting the proposed methodology, the paper illustrates the main results obtained for the test building by overlaying and comparing the data collected with both techniques, in order to outline their capability both to detect anomalies and to improve knowledge of the health state of the masonry building. The 3D model also provides a reference model, laying the groundwork for the implementation of a multisensor monitoring system based on the use of non-destructive techniques. PMID:25609042

  4. Combined use of terrestrial laser scanning and IR thermography applied to a historical building.

    PubMed

    Costanzo, Antonio; Minasi, Mario; Casula, Giuseppe; Musacchio, Massimo; Buongiorno, Maria Fabrizia

    2014-12-24

    The conservation of architectural heritage usually requires a multidisciplinary approach involving a variety of specialist expertise and techniques. Nevertheless, destructive techniques should be avoided wherever possible in order to preserve the integrity of historical buildings; the development of non-destructive and non-contact techniques is therefore extremely important. In this framework, a methodology combining terrestrial laser scanning and infrared thermal imaging is proposed, in order to obtain a reconnaissance of the conservation state of a historical building. The case study is the St. Augustine Monumental Compound, located in the historical centre of the town of Cosenza (Calabria, South Italy). Adopting the proposed methodology, the paper illustrates the main results obtained for the test building by overlaying and comparing the data collected with both techniques, in order to outline their capability both to detect anomalies and to improve knowledge of the health state of the masonry building. The 3D model also provides a reference model, laying the groundwork for the implementation of a multisensor monitoring system based on the use of non-destructive techniques.

  5. Experimental technique for simultaneous measurement of absorption-, emission cross-sections, and background loss coefficient in doped optical fibers

    NASA Astrophysics Data System (ADS)

    Karimi, M.; Seraji, F. E.

    2010-01-01

    We report a new, simple technique for the simultaneous measurement of the absorption and emission cross-sections, background loss coefficient, and dopant density of doped optical fibers with low dopant concentration. Using the proposed technique, the experimental characterization of a sample Ge-Er-doped optical fiber is presented, and the results are analyzed and compared with other reports. This technique is suitable for production-line characterization of doped optical fibers.

  6. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis, by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.

  7. Charge plasma technique based dopingless accumulation mode junctionless cylindrical surrounding gate MOSFET: analog performance improvement

    NASA Astrophysics Data System (ADS)

    Trivedi, Nitin; Kumar, Manoj; Haldar, Subhasis; Deswal, S. S.; Gupta, Mridula; Gupta, R. S.

    2017-09-01

    A charge plasma technique based dopingless (DL) accumulation mode (AM) junctionless (JL) cylindrical surrounding gate (CSG) MOSFET is proposed and extensively investigated. The proposed device has no physical junction at the source-to-channel and channel-to-drain interfaces, and the complete silicon pillar is undoped. The high free-electron density (induced N+ region) is created by keeping the work function of the source/drain metal contacts lower than that of undoped silicon. Fabrication complexity is thus drastically reduced by removing the need for high-temperature doping techniques. The electrical/analog characteristics of the proposed device have been extensively investigated using numerical simulation (the ATLAS-3D device simulator) and compared with a conventional junctionless cylindrical surrounding gate (JL-CSG) MOSFET of identical dimensions. The results show that the proposed device is more immune to short-channel effects than the conventional JL-CSG MOSFET and is suitable for faster switching applications due to its higher I_ON/I_OFF ratio.

  8. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

    This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the problem. Using the Meyer wavelet technique, new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of the technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and comparative results are presented.

  9. Feed-Forward Neural Network Prediction of the Mechanical Properties of Sandcrete Materials

    PubMed Central

    Asteris, Panagiotis G.; Roussis, Panayiotis C.; Douvika, Maria G.

    2017-01-01

    This work presents a soft-sensor approach for estimating critical mechanical properties of sandcrete materials. Feed-forward (FF) artificial neural network (ANN) models are employed to build soft-sensors able to predict the 28-day compressive strength and the modulus of elasticity of sandcrete materials. To this end, a new normalization technique for the pre-processing of data is proposed. The comparison of the derived results with the available experimental data demonstrates the capability of FF ANNs to predict the mechanical properties of sandcrete materials with high accuracy. Furthermore, the proposed normalization technique proves effective and robust compared to other normalization techniques available in the literature. PMID:28598400

  10. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator; the problem of choosing the kernel scaling factor based solely on a random sample is addressed, an interactive mode is discussed, and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples are displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
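
    For the kernel estimator, one standard automatic choice of the scaling factor is Silverman's rule of thumb (a later, generic rule, not the algorithm proposed in this paper). A compact Gaussian-kernel sketch:

    ```python
    import numpy as np

    def gaussian_kde(sample, grid):
        """Kernel density estimate on `grid` from a 1-D `sample`, with the
        scaling factor chosen automatically via Silverman's rule (one common
        automation of the choice discussed in the abstract)."""
        n = len(sample)
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)   # Silverman bandwidth
        z = (grid[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    ```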

  11. Feasibility study of single photon emission coupled tomography imaging technique based on prompt gamma ray during antiproton therapy using boron particle

    NASA Astrophysics Data System (ADS)

    Shin, Han-Back; Jung, Joo-Young; Kim, Moo-Sub; Kim, Sunmi; Choi, Yong; Yoon, Do-Kun; Suh, Tae Suk

    2018-06-01

    In this study, we propose an absorbed-dose monitoring technique using the prompt gamma rays emitted from the reaction between an antiproton and a boron particle, and demonstrate, using Monte Carlo simulation, the greater physical effect of antiproton boron fusion therapy in comparison with a proton beam. A physical treatment effect 3.5 times greater was confirmed for antiproton beam irradiation compared to proton beam irradiation. Moreover, a prompt gamma ray image was successfully acquired during antiproton irradiation of the boron regions. The results show the feasibility of the absorbed-dose monitoring technique proposed in our study.

  12. High-Yield Method for Dispersing Simian Kidneys for Cell Cultures

    PubMed Central

    de Oca, H. Montes; Probst, P.; Grubbs, R.

    1971-01-01

    A technique for the dispersion of animal tissue cells is described. The proposed technique is based on the concomitant use of trypsin and disodium ethylenediamine tetraacetate (EDTA). The use of the two dispersing agents (trypsin and disodium EDTA) markedly enhances cell yield as compared with standard cell dispersion methods. Moreover, a significant reduction in the time required for complete tissue dispersal, a very low number of nonviable cells, less cell clumping, and more uniform monolayer formation upon cultivation compare favorably with the results usually obtained with the standard trypsinization technique. PMID:4993235

  13. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term from previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source was tested in a glass tank using the proposed technique. Compared with the free field, the deviation of the radiated sound power level is less than 3 dB for the narrowband spectrum and less than 1 dB for the 1/3-octave spectrum. The proposed technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also to measure the radiated sound power of complicated sources in non-anechoic tanks.

  14. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has the desirable property that the sum of all bit-plane weights does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., the 2nd LSB to the 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural-number-based embedding, and from the 6th bit-plane onwards it offers better stego quality. In general, the proposed decomposition scheme affects pixel values less than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
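
    One of the existing decompositions evaluated above, the Fibonacci (Zeckendorf) representation, illustrates the family well: each pixel value is written over Fibonacci weights, yielding virtual bit-planes for embedding. A minimal sketch (the paper's own 16-plane decomposition differs):

    ```python
    FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # weights covering 0..255

    def fibonacci_planes(value):
        """Greedy Zeckendorf decomposition of a pixel value (0..255) into
        virtual bit-planes over Fibonacci weights; the greedy choice never
        sets two adjacent weights, which keeps the representation unique."""
        bits = [0] * len(FIB)
        for i in range(len(FIB) - 1, -1, -1):
            if FIB[i] <= value:
                bits[i], value = 1, value - FIB[i]
        return bits          # bits[0] is the least significant virtual plane

    assert sum(b * w for b, w in zip(fibonacci_planes(200), FIB)) == 200
    ```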

  15. An analysis technique for testing log grades

    Treesearch

    Carl A. Newport; William G. O' Regan

    1963-01-01

    An analytical technique that may be used to evaluate log-grading systems is described. It also provides a means of comparing two or more grading systems, or a proposed change with the system from which it was developed. The total volume and computed value of lumber from each sample log are the basic data used.

  16. A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio

    2013-01-01

    The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which yields similar information. The…

  17. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network

    PubMed Central

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-01-01

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to the fault diagnosis of rotating machinery. Conventional AI methods rely on features selected by a human operator, manually extracted on the basis of diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, with a genetic algorithm used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust fault-related features by unsupervised feature learning, and thus requires less prior knowledge of signal processing techniques and diagnostic expertise. It is also more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., the back propagation neural network (BPNN) and the support vector machine (SVM). The fault classification accuracies of the proposed method are 99.26% for the rolling bearings and 100% for the gearbox, much higher than those of the other two methods. PMID:28677638

  18. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network.

    PubMed

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-07-04

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to the fault diagnosis of rotating machinery. Conventional AI methods rely on features selected by a human operator, manually extracted on the basis of diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, with a genetic algorithm used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust fault-related features by unsupervised feature learning, and thus requires less prior knowledge of signal processing techniques and diagnostic expertise. It is also more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., the back propagation neural network (BPNN) and the support vector machine (SVM). The fault classification accuracies of the proposed method are 99.26% for the rolling bearings and 100% for the gearbox, much higher than those of the other two methods.

  19. Realisation and robustness evaluation of a blind spatial domain watermarking technique

    NASA Astrophysics Data System (ADS)

    Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.

    2017-04-01

    A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark is embedded in intermediate significant bit planes, besides the least significant bit plane, at address locations determined by a pseudorandom address vector (PAV). Embedding via the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt and pepper noise, filtering and rotation. The experimental results reveal an interesting fact: for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained with the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
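
    The PAV addressing idea can be illustrated with a short sketch: a key seeds a pseudorandom permutation of pixel addresses, and watermark bits are written into the chosen bit plane at those addresses. The generator below is hypothetical, standing in for the paper's PAV construction.

    ```python
    import numpy as np

    def embed_pav(image, bits, plane=0, key=1234):
        """Embed watermark `bits` into bit-plane `plane` (0 = LSB) of a
        greyscale uint8 image at pseudorandom pixel addresses derived from
        `key`: a sketch of PAV-style addressing, not the paper's exact
        generator."""
        flat = image.flatten()                      # copy; original untouched
        addr = np.random.default_rng(key).permutation(flat.size)[:len(bits)]
        mask = np.uint8(1 << plane)
        for a, b in zip(addr, bits):
            flat[a] = (flat[a] & ~mask) | (np.uint8(b) << plane)
        return flat.reshape(image.shape)

    # extraction reuses the key: bit = (stego.flatten()[a] >> plane) & 1
    ```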

  20. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.

  1. Terahertz wave electro-optic measurements with optical spectral filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilyakov, I. E., E-mail: igor-ilyakov@mail.ru; Shishkin, B. V.; Kitaeva, G. Kh.

    We propose electro-optic detection techniques based on variations of the laser pulse spectrum induced during pulse co-propagation with terahertz radiation in a nonlinear crystal. A quantitative comparison with two other detection methods is made. Experiments demonstrate a substantial improvement in sensitivity compared to the standard electro-optic detection technique (at high frequencies) and to the previously demonstrated technique based on laser pulse energy changes.

  2. A New Data Representation Based on Training Data Characteristics to Extract Drug Name Entity in Medical Text

    PubMed Central

    Basaruddin, T.

    2016-01-01

    One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text mining poses more challenges: for example, more unstructured text, the fast growth of new terms, a wide range of name variations for the same drug, the lack of labeled datasets and external knowledge, and multiple token representations for a single drug name. Although many approaches have been proposed to address the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities obtained from word embedding training. The first technique is evaluated with a standard NN model, i.e., an MLP. The second technique involves two deep network classifiers, i.e., a DBN and an SAE. The third technique represents the sentence as a sequence and is evaluated with a recurrent NN model, i.e., an LSTM. In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645. PMID:27843447

  3. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As a circuit-oriented approach to this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory accesses, the proposed technique realizes efficient instruction fetch by eliminating redundant memory accesses while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirements for the embedded MRAM, and a more compact and lower-power implementation is possible than with a conventional cache-based design. Through an evaluation using a system consisting of a general-purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system by up to 3.71 times, while achieving a 2.29-fold area reduction compared with the cache-based design.

  4. [Comparison of techniques for coliform bacteria extraction from sediment of Xochimilco Lake, Mexico].

    PubMed

    Fernández-Rendón, Carlos L; Barrera-Escorcia, Guadalupe

    2013-01-01

    The need to separate bacteria from sediment in order to count them appropriately has led to tests of the efficacy of different techniques. In this research, traditional techniques such as manual shaking, homogenization, ultrasonication, and surfactant treatment are compared. Moreover, the possibility of using a set of enzymes (pancreatin) and an antibiotic (ampicillin) for coliform extraction from sediment is proposed. Samples were obtained from Xochimilco Lake in Mexico City. The most probable number of coliform bacteria was determined after applying the appropriate separation procedure. Most of the techniques tested yielded numbers similar to those of the control (manual shaking). Only with the use of ampicillin was a greater total coliform concentration observed (Mann-Whitney, z = 2.09; p = 0.03). It is possible to propose the use of ampicillin as a technique for total coliform extraction; however, the sensitivity of bacteria to the antibiotic must be considered.

  5. Developing a hybrid dictionary-based bio-entity recognition technique.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2015-01-01

    Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities, such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues, to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance in F-measure among the compared resources: GENIA, MeSH, UMLS, and combinations of these three. The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady precision with three different dictionaries, whereas the context-only technique achieves high-end recall with three different dictionaries.

  6. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities, such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues, to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance in F-measure among the compared resources: GENIA, MeSH, UMLS, and combinations of these three. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady precision with three different dictionaries, whereas the context-only technique achieves high-end recall with three different dictionaries. PMID:26043907

  7. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed, based on the generalized Taylor series formula and a residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components, using symbolic computation software. For evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resultant simulations clearly demonstrate the superiority and potential of the proposed technique in terms of solution quality and the accuracy with which substructure is preserved when constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations.

  8. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures

    NASA Astrophysics Data System (ADS)

    Muzammel, M.; Yusoff, M. Zuki; Malik, A. Saeed; Mohamad Saad, M. Naufal; Meriaudeau, F.

    2017-03-01

    In many Asian countries, motorcyclists have a higher fatality rate than other road users. Among many other factors, rear-end collisions contribute to these fatalities. Collision detection systems can be useful to minimize these accidents. However, designing an efficient and cost-effective collision detection system for motorcyclists remains a major challenge. In this paper, an acoustic-information-based, cost-effective and efficient collision detection system is proposed for motorcycle applications. The proposed technique uses the short-time Fourier transform (STFT) to extract features from the audio signal, and principal component analysis (PCA) is used to reduce the feature vector length. The reduction in feature length further increases the performance of this technique. The proposed technique has been tested on a self-recorded dataset and achieves an accuracy of 97.87%. We believe that this method can help reduce a significant number of motorcycle accidents.
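
    A minimal sketch of the described feature pipeline, assuming a fixed clip length and an illustrative component count; the classifier trained on the reduced vectors is left out:

```python
# Sketch of the described pipeline: STFT magnitudes of audio clips, flattened
# and reduced with PCA. Frame size and component count are assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA

fs = 16000
clips = [np.random.randn(fs) for _ in range(20)]   # stand-ins for recorded clips

def stft_features(x, fs, nperseg=256):
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)      # time-frequency representation
    return np.abs(Z).ravel()                       # magnitude spectrogram as a vector

X = np.vstack([stft_features(c, fs) for c in clips])
X_reduced = PCA(n_components=10).fit_transform(X)  # much shorter feature vectors
print(X.shape, "->", X_reduced.shape)
# A classifier would then be trained on X_reduced to flag the acoustic
# signature of an approaching vehicle.
```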

  9. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.

    PubMed

    Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho

    2017-08-17

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K -user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.

  10. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel

    PubMed Central

    Li, Jun; Lee, Moon Ho

    2017-01-01

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes. PMID:28817071

  11. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data without placing a significant added load on the monitoring systems and networks. This requires good data mining strategies that take less time and give accurate results. In this study, a novel data-mining-assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, which data mining techniques help detect. Our proposed DMAS-IDS shows superior performance compared to centralized sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  12. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes, and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge, this is the first study that analytically evaluates and compares the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143

  13. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.

  14. A new simple technique for improving the random properties of chaos-based cryptosystems

    NASA Astrophysics Data System (ADS)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

    A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique improves the randomness properties of the generated keystream by preventing the system from falling into short period cycles due to digitization. To test this technique, a stream cipher based on a skew tent map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to that of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have shown that our method can considerably improve the randomness of the generated keystreams. Incorporating the randomness-enhancement technique required only 41 extra slices, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
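
    A minimal sketch of a skew tent map keystream with a periodic perturbation step; the perturbation scheme shown here is an illustrative assumption, not the authors' published circuit:

```python
# Skew tent map keystream generator with a simple perturbation step intended
# to keep the digitized orbit from falling into short cycles. The perturbation
# scheme (a tiny counter-dependent offset) is an illustrative assumption.
import numpy as np

def skew_tent(x, p=0.4999):
    # piecewise-linear chaotic map on (0, 1) with skew parameter p
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def keystream(n_bytes, x=0.123456789, p=0.4999, perturb_every=64):
    out = bytearray()
    for i in range(n_bytes):
        x = skew_tent(x, p)
        if i % perturb_every == perturb_every - 1:
            # disturb the trajectory slightly so finite precision cannot
            # trap it in a short periodic cycle
            x = (x + i * 1e-12) % 1.0 or 0.1
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

ks = keystream(32)
print(ks.hex())  # randomness would then be checked with the NIST test suite
```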

  15. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  16. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input, multiple-output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  17. An unconditionally stable staggered algorithm for transient finite element analysis of coupled thermoelastic problems

    NASA Technical Reports Server (NTRS)

    Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.

    1991-01-01

    An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.

  18. Readout circuit with novel background suppression for long wavelength infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Xie, L.; Xia, X. J.; Zhou, Y. F.; Wen, Y.; Sun, W. F.; Shi, L. X.

    2011-02-01

    In this article, a novel pixel readout circuit using a switched-capacitor integrator mode background suppression technique is presented for long wavelength infrared focal plane arrays. This circuit can improve dynamic range and signal-to-noise ratio by suppressing the large background current during integration. Compared with other background suppression techniques, the new background suppression technique is less sensitive to the process mismatch and has no additional shot noise. The proposed circuit is theoretically analysed and simulated while taking into account the non-ideal characteristics. The result shows that the background suppression non-uniformity is ultra-low even for a large process mismatch. The background suppression non-uniformity of the proposed circuit can also remain very small with technology scaling.

  19. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  20. Techniques of laparoscopic cholecystectomy: Nomenclature and selection.

    PubMed

    Haribhakti, Sanjiv P; Mistry, Jitendra H

    2015-01-01

    There are more than 50 different techniques of laparoscopic cholecystectomy (LC) available in the literature, mainly due to modifications by surgeons aiming to improve postoperative outcome and cosmesis. These modifications include reductions in port size and/or number compared with standard LC. There is no uniform nomenclature to describe these different techniques, so it is not possible to compare their outcomes. We briefly describe the advantages and disadvantages of each of these techniques and suggest the situations in which a particular technique would be useful. We also propose a nomenclature that is easy to remember and apply, so that future comparisons between techniques will be possible.

  1. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than them.

  2. Abatement of PAPR for ACO-OFDM deployed in VLC systems by frequency modulation of the baseband signal forming a constant envelope

    NASA Astrophysics Data System (ADS)

    Kumar Singh, Vinay; Dalal, U. D.

    2017-06-01

    To inhibit the effect of LED non-linearity, which leads to a significant increase in the peak-to-average power ratio (PAPR) of OFDM signals in visible light communication (VLC), we propose a frequency-modulated constant-envelope OFDM (FM CE-OFDM) technique. The abrupt amplitude variations of the OFDM signal are frequency modulated before being applied to the LED for electro-optical conversion, resulting in a constant-envelope signal. This constant-envelope signal, at sufficient DC bias, keeps the LED in the linear region of operation. The proposed technique reduces the PAPR to the lowest possible value, ≈0 dB. We analyze the system theoretically and perform numerical simulations to assess its enhancement. The optimal modulation index is found to be 0.3. The metric pertaining to phase discontinuity is derived and found to be smaller for FM CE-OFDM than for phase-modulated (PM) CE-OFDM. The receiver sensitivity is improved by 1.6 dB for a transmission distance of 2 m for FM CE-OFDM compared to PM CE-OFDM at the FEC threshold. We compare the BER performance of ideal OFDM (without LED non-linearity), power back-off OFDM, PM CE-OFDM, and FM CE-OFDM in an optical wireless channel (OWC) scenario. FM CE-OFDM shows an improvement of 2.1 dB in SNR at the FEC threshold compared to PM CE-OFDM. It also shows an improvement of 11 dB when compared with the power back-off technique used in VLC systems with 10 dB power back-off.
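
    A minimal sketch of the FM constant-envelope idea, assuming a real-valued OFDM baseband built from a Hermitian-symmetric IFFT; only the modulation index 0.3 is taken from the abstract, and the subcarrier mapping is simplified:

```python
# FM-based constant-envelope OFDM sketch: a real OFDM baseband waveform is
# frequency modulated so the transmitted signal has ~0 dB PAPR. Subcarrier
# count and QPSK mapping are illustrative assumptions.
import numpy as np

N = 64                                             # subcarriers (assumed)
h = 0.3                                            # modulation index from the abstract
sym = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], N // 2 - 1)

# Hermitian-symmetric spectrum -> real-valued OFDM baseband signal
spec = np.zeros(N, dtype=complex)
spec[1:N // 2] = sym
spec[N // 2 + 1:] = np.conj(sym[::-1])
m = np.fft.ifft(spec).real
m /= np.max(np.abs(m))                             # normalize amplitude

phase = 2 * np.pi * h * np.cumsum(m)               # FM: phase is the integral of m
s = np.exp(1j * phase)                             # constant-envelope signal

papr = lambda x: 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"OFDM PAPR {papr(m):.1f} dB -> CE-OFDM PAPR {papr(s):.1f} dB")  # ~0 dB after FM
```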

  3. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  4. Pressure ulcer image segmentation technique through synthetic frequencies generation and contrast variation using toroidal geometry.

    PubMed

    David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García

    2017-01-06

    Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the most affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually take weeks or even months. Using non-invasive techniques, such as image processing, it is possible to analyze ulcers and aid in their diagnosis. This paper proposes a novel image segmentation technique based on contrast changes, using synthetic frequencies obtained from the grayscale value of each pixel of the image. These synthetic frequencies are calculated using the model of energy density in an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, which yields the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. Segmentation of these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained using the methodology with the real segments. The proposed technique is compared with two benchmark algorithms. It achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds, presenting better segmentation results than the benchmark algorithms with less computational time and without the need for an initial condition.
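
    A minimal sketch of the final binarization and morphological clean-up stages; the paper-specific synthetic-frequency contrast decomposition is replaced here by a plain grayscale stand-in:

```python
# Otsu binarization followed by morphological clean-up, as in the final stages
# described above. The contrast-decomposition step based on synthetic
# frequencies is specific to the paper and is omitted for illustration.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, remove_small_objects, disk

image = np.random.rand(256, 256)                 # stand-in for a contrast-adjusted ulcer image

mask = image > threshold_otsu(image)             # Otsu's threshold -> binary contours
mask = binary_closing(mask, disk(3))             # close small gaps in the region
mask = remove_small_objects(mask, min_size=64)   # keep only wound-sized segments
print(mask.sum(), "pixels in the segmented region")
```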

  5. Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach

    PubMed Central

    Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo

    2014-01-01

    Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique for diagnosing various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak and estimating the augmentation index (AIx), the arterial pressure, or the arterial stiffness. One of the advantages of US imaging is its non-invasive nature, unlike invasive techniques such as intravascular ultrasound (IVUS) or angiography, plus the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare the different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to use and the impact of the accuracy of these different stages on the proposed method. To this end, an exhaustive assessment was conducted to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987

  6. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. However, recognition, authentication, and liveness detection of the face of an actual user is a major challenge, because an impostor or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed that detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
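
    A minimal sketch of the band-energy comparison, with illustrative band edges; the authors' selective bands and decision rule are not reproduced here:

```python
# A short stack of face frames is transformed with a 3D FFT and the energy in
# a high-frequency band is summed. Band edge and threshold are assumptions.
import numpy as np

def highband_energy(frames, lo=0.6):
    """frames: (T, H, W) grayscale stack; returns fraction of energy above `lo`."""
    F = np.fft.fftshift(np.fft.fftn(frames))
    P = np.abs(F) ** 2                               # 3D power spectrum
    T, H, W = frames.shape
    t, y, x = np.ogrid[:T, :H, :W]
    # normalized radial distance from the spectrum center
    r = np.sqrt(((t - T/2)/(T/2))**2 + ((y - H/2)/(H/2))**2 + ((x - W/2)/(W/2))**2)
    return P[r > lo].sum() / P.sum()

live = np.random.rand(16, 64, 64)                    # stand-ins for captured frame stacks
fake = np.random.rand(16, 64, 64)
# A live face, with its micro-motion across frames, is expected to distribute
# spectral energy differently from a static photograph held to the camera.
print(highband_energy(live), highband_energy(fake))
```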

  7. Wireless ultrasonic wavefield imaging via laser for hidden damage detection inside a steel box girder bridge

    NASA Astrophysics Data System (ADS)

    An, Yun-Kyu; Song, Homin; Sohn, Hoon

    2014-09-01

    This paper presents a wireless ultrasonic wavefield imaging (WUWI) technique for detecting hidden damage inside a steel box girder bridge. The proposed technique allows (1) complete wireless excitation of piezoelectric transducers and noncontact sensing of the corresponding responses using laser beams, (2) autonomous damage visualization without comparing against baseline data previously accumulated from the pristine condition of a target structure and (3) robust damage diagnosis even for real structures with complex structural geometries. First, a new WUWI hardware system was developed by integrating optoelectronic-based signal transmitting and receiving devices and a scanning laser Doppler vibrometer. Next, a damage visualization algorithm, self-referencing f-k filter (SRF), was introduced to isolate and visualize only crack-induced ultrasonic modes from measured ultrasonic wavefield images. Finally, the performance of the proposed technique was validated through hidden crack visualization at a decommissioned Ramp-G Bridge in South Korea. The experimental results reveal that the proposed technique instantaneously detects and successfully visualizes hidden cracks even in the complex structure of a real bridge.

  8. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
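
    A minimal sketch of the soft-threshold filtering step at the core of TDM-STF, with an assumed threshold; the OSTR iteration and acceleration factors are omitted:

```python
# Soft-threshold filtering: the image gradient is shrunk toward zero, which is
# what makes the total-difference penalty tractable. Threshold is an assumption.
import numpy as np

def soft_threshold(x, tau):
    # shrink each coefficient toward zero by tau (the proximal map of the L1 norm)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

img = np.random.rand(128, 128)
gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
gx_f, gy_f = soft_threshold(gx, 0.05), soft_threshold(gy, 0.05)
print(np.abs(gx).mean(), "->", np.abs(gx_f).mean())  # gradients are sparsified
```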

  9. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  10. Evaluation of Ultrasonic Fiber Structure Extraction Technique Using Autopsy Specimens of Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hirai, Kazuki; Yamada, Hiroyuki; Ebara, Masaaki; Hachiya, Hiroyuki

    2005-06-01

    It is very important to diagnose liver cirrhosis noninvasively and correctly. In our previous studies, we proposed a processing technique to detect changes in liver tissue in vivo. In this paper, we propose evaluating the relationship between liver disease and echo information using autopsy specimens of human liver in vitro. In vitro experiments make it possible to verify the function of a processing parameter clearly and to compare the processing result with the actual human liver tissue structure. Using our processing technique, information that did not obey a Rayleigh distribution was extracted from the echo signal of the autopsy liver specimens, depending on changes in a particular processing parameter. The fiber tissue structure of the same specimen was extracted from a number of histological images of stained tissue. We constructed 3D structures using the information extracted from the echo signal and the fiber structure of the stained tissue, and compared the two. By comparing the 3D structures, it is possible to evaluate the relationship between the non-Rayleigh information in the echo signal and the fibrosis structure.

  11. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A simple data compression technique was developed for SAR imagery that provides 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms were proposed, and the effectiveness of each algorithm was compared quantitatively.

  12. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple-feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, which is solved by alternately implementing a hard thresholding operation and singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates physical priors with a data-driven strategy, so multiple fault features with similar oscillation morphology can be naturally and adaptively decoupled. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal-space description property is guaranteed, and thus the weak-feature leakage problem encountered by typical learning methods is avoided.

  13. Reinforcing the role of the conventional C-arm--a novel method for simplified distal interlocking.

    PubMed

    Windolf, Markus; Schroeder, Josh; Fliri, Ladina; Dicht, Benno; Liebergall, Meir; Richards, R Geoff

    2012-01-25

    The common practice for insertion of distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming, and associated with considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced that utilizes information from within conventional radiographic images to accurately guide the surgeon in placing the interlocking bolt into the interlocking hole. The newly developed technique was compared to the conventional freehand technique in an operating room (OR)-like setting on human cadaveric lower legs, in terms of operating time and radiation exposure. The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh-frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing both the newly proposed technique and the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images, and radiation time were recorded and statistically compared between interlocking techniques using non-parametric tests. A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4, mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p < 0.001). Total radiation time (all four screws) was 55% lower for the guided freehand technique compared to conventional freehand (p = 0.001). Operating time per screw (from first shot to screw tightened) was on average 22% lower with guided freehand (p = 0.018). In an experimental setting, the newly developed guided freehand technique for distal interlocking has been shown to markedly reduce radiation exposure compared to the conventional freehand technique. The method utilizes established clinical workflows and does not require cost-intensive add-on devices or extensive training. The underlying principle carries potential to assist implant positioning in numerous other applications within orthopedics and trauma, from screw insertion to the placement of plates, nails, or prostheses.

  14. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

    A calibration technique is proposed that allows the calibration of certain angular measurement devices without requiring an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device meeting the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
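
    A sketch of the underlying error-separation idea, under the stated assumption of deterministic (and, here, periodic) bias errors; the exact data-reduction procedure is in the report itself:

```latex
% Error separation by rotation (a sketch of the underlying idea). Two
% comparisons of devices A and B are made:
%   before rotation:          d_1(\theta) = e_A(\theta) - e_B(\theta)
%   after rotating B by phi:  d_2(\theta) = e_A(\theta) - e_B(\theta + \phi)
% Subtracting eliminates e_A, leaving a difference equation for e_B alone:
\[
  d_1(\theta) - d_2(\theta) = e_B(\theta + \phi) - e_B(\theta),
\]
% which, for periodic deterministic bias errors expanded in a Fourier
% series, can be solved harmonic by harmonic without an absolute standard.
```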

  15. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. In particular, enhanced-resolution techniques are developed and/or used in this specific context for the first time. Compared to traditional time-frequency representation methods, the proposed techniques are more capable of revealing previously unseen features in the structure of bat echolocation calls. Although the study focuses on bat echolocation recordings, the results are more general and applicable to many other types of signal.

  16. Phase recovery in temporal speckle pattern interferometry using the generalized S-transform.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2008-04-15

    We propose a novel approach based on the generalized S-transform to retrieve optical phase distributions in temporal speckle pattern interferometry. The performance of the proposed approach is compared with those given by well-known techniques based on the continuous wavelet and Hilbert transforms, and on a smoothed time-frequency distribution, by analyzing interferometric data degraded by noise, nonmodulating pixels, and modulation loss. The advantages and limitations of the proposed phase retrieval approach are discussed.

  17. Extension of electronic speckle correlation interferometry to large deformations

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Sciammarella, Federico M.

    1998-07-01

    The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.

  18. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    PubMed Central

    Macropol, Kathy; Can, Tolga; Singh, Ambuj K

    2009-01-01

    Background We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters. PMID:19740439
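
    A minimal sketch of the core random-walk-with-restart step on a toy network; the restart probability, the ranking cutoff, and the way RRW combines repeated walks are illustrative assumptions:

```python
# Random walk with restart from a seed protein: nodes with high stationary
# probability form a candidate cluster around the seed.
import numpy as np

def random_walk_with_restart(W, seed, restart=0.3, n_iter=100):
    """W: (n, n) weighted adjacency matrix; returns affinity of all nodes to seed."""
    P = W / W.sum(axis=0, keepdims=True)       # column-normalized transition matrix
    p = np.zeros(W.shape[0]); p[seed] = 1.0
    e = p.copy()
    for _ in range(n_iter):
        p = (1 - restart) * P @ p + restart * e
    return p

# Toy 5-protein network: nodes 0-2 are densely connected, 3-4 hang off node 2.
W = np.array([[0,1,1,0,0],[1,0,1,0,0],[1,1,0,1,0],[0,0,1,0,1],[0,0,0,1,0]], float)
scores = random_walk_with_restart(W, seed=0)
print(np.argsort(scores)[::-1])   # nodes ranked by affinity to the seed protein
# The full RRW method grows clusters by repeating such walks from the proteins
# of a candidate cluster, which also allows overlapping modules.
```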

  19. A reference protocol for comparing the biocidal properties of gas plasma generating devices

    NASA Astrophysics Data System (ADS)

    Shaw, A.; Seri, P.; Borghi, C. A.; Shama, G.; Iza, F.

    2015-12-01

    Growing interest in the use of non-thermal, atmospheric-pressure gas plasmas for decontamination purposes has resulted in a multiplicity of plasma-generating devices. There is currently no universally approved method of comparing the biocidal performance of such devices, and in the work described here spores of the Gram-positive bacterium Bacillus subtilis (ATCC 6633) are proposed as a suitable reference biological agent. In order to achieve consistency in the form in which the biological agent is presented to the plasma, a polycarbonate membrane loaded with a monolayer of spores is proposed. The advantages of the proposed protocol are evaluated by comparing inactivation tests in which an alternative microorganism (methicillin-resistant Staphylococcus aureus, MRSA) and the widely used sample preparation technique of directly pipetting cell suspensions onto membranes are employed. In all cases, inactivation tests with either UV irradiation or plasma exposure were more reproducible when the proposed protocol was followed.

  20. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise display a uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images acquired with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected, and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied to phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with that of other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied to final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.

  1. Pseudo-steady-state non-Gaussian Einstein-Podolsky-Rosen steering of massive particles in pumped and damped Bose-Hubbard dimers

    NASA Astrophysics Data System (ADS)

    Olsen, M. K.

    2017-02-01

    We propose and analyze a pumped and damped Bose-Hubbard dimer as a source of continuous-variable Einstein-Podolsky-Rosen (EPR) steering with non-Gaussian statistics. We use the approximate truncated Wigner and the exact positive-P representations to calculate and compare predictions for intensities, second-order quantum correlations, and third- and fourth-order cumulants. We find agreement for the intensities and for the products of inferred quadrature variances, which indicate that states demonstrating the EPR paradox are present. We find clear signals of non-Gaussianity in the quantum states of the modes from both the approximate and exact techniques, with quantitative differences in their predictions. Our proposed experimental configuration is extrapolated from current experimental techniques and adds another apparatus to the toolbox of quantum atom optics.

  2. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example.
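
    A minimal sketch of a jackknife confidence interval for a mean cost on complete (uncensored) stand-in data; the influence-function and empirical-likelihood machinery for censoring is omitted:

```python
# Jackknife confidence interval for a mean, as one ingredient of the approach
# described above. The gamma-distributed costs are stand-in data.
import numpy as np
from scipy import stats

costs = np.random.gamma(shape=2.0, scale=5000.0, size=200)   # stand-in cost data
n = len(costs)
theta_hat = costs.mean()

# leave-one-out estimates and jackknife pseudo-values
loo = np.array([np.delete(costs, i).mean() for i in range(n)])
pseudo = n * theta_hat - (n - 1) * loo
se = pseudo.std(ddof=1) / np.sqrt(n)

z = stats.norm.ppf(0.975)
print(f"95% CI: ({pseudo.mean() - z*se:.0f}, {pseudo.mean() + z*se:.0f})")
```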

  3. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array.

    PubMed

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-11-10

    In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on two well-known scene-based methods: an adaptive method and an interframe-registration method exploiting a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective in certain conditions and unsuited to others. Building on that, we developed a method robust to the various conditions that may slow or impair the correction process, by elaborating a decision criterion that switches the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above.

  4. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array

    PubMed Central

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-01-01

    In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on two well-known scene-based methods: an adaptive method and an interframe-registration method exploiting a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective in certain conditions and unsuited to others. Building on that, we developed a method robust to the various conditions that may slow or impair the correction process, by elaborating a decision criterion that switches the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above. PMID:27834893

  5. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique for quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using piecewise cubic Hermite interpolation (PCHIP) is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighbor, first-order interpolation, and the original PCHIP, are compared with the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method for estimating SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
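
    A minimal sketch of the plain PCHIP baseline on a stand-in image row; the adaptive-tuning step that defines ATPCHIP is paper-specific and omitted:

```python
# The noise spike sits only at zero lag of the autocorrelation, so the
# noise-free zero-lag value is recovered by interpolating across it.
import numpy as np
from scipy.interpolate import PchipInterpolator

img = np.random.rand(256, 256)                      # stand-in for a noisy SEM image
row = img[128] - img[128].mean()                    # one scan line, zero-mean
acf = np.correlate(row, row, mode='full')
acf = acf[acf.size // 2:] / row.size                # one-sided autocovariance

lags = np.arange(1, 6)                              # neighboring lags, zero excluded
r0_hat = float(PchipInterpolator(lags, acf[lags])(0.0))  # extrapolated noise-free peak

noise_var = acf[0] - r0_hat                         # the spike height is the noise power
snr = r0_hat / noise_var
print(f"estimated SNR = {snr:.2f}")
```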

  6. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on all of the high-resolution videos that are dominantly consumed today. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, diverse bitrates were tested to assess general performance. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. VMAF values were especially improved at low bitrates. Also, in subjective testing, the method produced better subjective visual quality at similar bit rates.

  7. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Many researchers have recently investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance, and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  8. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
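
    The exact prefilter and the optimal regularisation factor are derived in the paper; the following NumPy/SciPy sketch only shows the underlying idea of a regularised, coherence-weighted generalised cross-correlation, with nperseg and eps as illustrative choices:

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def gcc_time_delay(x1, x2, fs, eps=1e-2):
        """Time-difference estimate with a regularised ML-style GCC prefilter.

        The coherence-based ML weight is damped by a regularisation term eps
        to tolerate PSD/CSD estimation errors.
        """
        f, S12 = csd(x1, x2, fs=fs, nperseg=1024)
        _, S11 = welch(x1, fs=fs, nperseg=1024)
        _, S22 = welch(x2, fs=fs, nperseg=1024)
        coh = np.abs(S12) ** 2 / (S11 * S22)
        w = coh / (np.abs(S12) * (1.0 - coh + eps))   # regularised ML weight
        r = np.fft.fftshift(np.fft.irfft(w * S12))    # weighted cross-correlation
        lags = np.arange(-len(r) // 2, len(r) // 2)
        return lags[np.argmax(np.abs(r))] / fs        # delay estimate in seconds
    ```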

  9. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  10. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
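
    As an illustration of the focusing step, the sketch below scores candidate motion-compensated images with a gradient entropy metric and assembles the sharpest candidate per patch; patchwise selection is a simplified stand-in for the paper's pixel-by-pixel generation, and the patch size is an arbitrary choice:

    ```python
    import numpy as np

    def gradient_entropy(img):
        """Gradient entropy focusing metric: lower when edges are sharper."""
        gy, gx = np.gradient(img.astype(float))
        g = np.hypot(gx, gy).ravel() + 1e-12
        p = g / g.sum()
        return -np.sum(p * np.log(p))

    def autofocus(bank, patch=16):
        """Assemble a corrected image from a bank of translation-compensated
        candidates, choosing the sharpest candidate for each patch."""
        H, W = bank[0].shape
        out = np.zeros((H, W))
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                tiles = [im[i:i + patch, j:j + patch] for im in bank]
                best = min(range(len(bank)),
                           key=lambda k: gradient_entropy(tiles[k]))
                out[i:i + patch, j:j + patch] = tiles[best]
        return out
    ```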

  11. Surgical treatment of chronic pancreatitis and its complications. Comparative analysis of results in 91 patients.

    PubMed

    Marinov, V; Draganov, K; Gaydarski, R; Katev, N N

    2013-01-01

    There is a large variety of proposed conservative, invasive, endoscopic, and surgical methods for the treatment of chronic pancreatitis and its complications. This study presents a comparative analysis of the results from each group of patients subjected to drainage, resection, denervation, and other operative techniques, for a total of 91 patients with chronic pancreatitis and its complications. Drainage and resection operative techniques yield comparable results in terms of postoperative pain control (93.1% and 100%), perioperative mortality (3.17% and 5.8%), and perioperative morbidity (7.9% and 11.7%), respectively. There is a significant increase in the incidence of diabetes in the resection group. Right-sided semilunar ganglionectomy is a good method for pain control as an accompanying procedure in the course of another main operative technique.

  12. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused by using the antiscatter grid when obtaining digital x-ray images. In this paper, grid artifact reduction techniques are investigated, especially for direct detectors, which are based on amorphous selenium. Methods: To analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to the filters used to suppress the grid artifacts, grids rotated with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for grid artifact reduction based on band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested on digital x-ray images obtained from direct detectors with the rotated grids and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, simple homomorphic low-pass filters can be used in the spatial domain, alleviating the grid artifacts with very low implementation complexity.
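
    The core of the homomorphic approach is that a multiplicative grid pattern becomes additive after a log transform and can then be notched out. A minimal NumPy sketch of that idea (the grid frequency f0, bandwidth, and a purely vertical grid orientation are illustrative assumptions; the paper additionally optimises frequencies and angles):

    ```python
    import numpy as np

    def homomorphic_bandstop(img, f0, bw=0.01):
        """Suppress a multiplicative grid pattern: log -> band-stop -> exp.

        f0 is the normalised grid frequency along the vertical axis.
        """
        logi = np.log(img.astype(float) + 1.0)       # multiplicative -> additive
        F = np.fft.fftshift(np.fft.fft2(logi))
        H, W = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(H))[:, None] * np.ones((1, W))
        stop = np.abs(np.abs(fy) - f0) < bw          # band-stop mask around +/- f0
        F[stop] = 0.0
        out = np.fft.ifft2(np.fft.ifftshift(F)).real
        return np.exp(out) - 1.0
    ```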

  13. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    NASA Astrophysics Data System (ADS)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation, using the Dykstra-Parsons coefficient (VDP) and autocorrelation lengths to generate 2D stochastic permeability values, which are also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation is compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media; this can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-resolution scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations are solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique, and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM, and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. The impact of autocorrelation lengths on immiscible fluid flow transport was also analyzed and quantified. The finite number of lines used in the TBM resulted in a visible banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in designing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
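
    For reference, the SUPERBEE limiter used by the first of these high-resolution schemes has a standard closed form, phi(r) = max(0, min(2r, 1), min(r, 2)), where r is the ratio of consecutive solution gradients; a direct NumPy transcription:

    ```python
    import numpy as np

    def superbee(r):
        """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
        return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                          np.minimum(r, 2.0)))
    ```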

  14. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection.

    PubMed

    Iliyasu, Abdullah M; Fatichah, Chastine

    2017-12-19

    A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smear (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria into classification accuracy based on the choice of best features and accuracy across the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared their classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifest in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.

  15. Techniques of laparoscopic cholecystectomy: Nomenclature and selection

    PubMed Central

    Haribhakti, Sanjiv P.; Mistry, Jitendra H.

    2015-01-01

    There are more than 50 different techniques of laparoscopic cholecystectomy (LC) available in the literature, mainly due to modifications by surgeons aiming to improve postoperative outcomes and cosmesis. These modifications include reductions in port size and/or number compared with standard LC. There is no uniform nomenclature to describe these different techniques, so it is not possible to compare their outcomes. We briefly discuss the advantages and disadvantages of each of these techniques and suggest the situations where a particular technique would be useful. We also propose a nomenclature that is easy to remember and apply, so that future comparisons between techniques become possible. PMID:25883450

  16. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.

  17. Knee cartilage segmentation using active shape models and local binary patterns

    NASA Astrophysics Data System (ADS)

    González, Germán.; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage is useful for the timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and is validated through the Leave-One-Out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine that improves robustness against two principal problems: oversegmentation and initialization.
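
    As an illustration of the texture descriptor, the following sketch computes a uniform-LBP histogram for a grey-level patch with scikit-image; P, R, and the histogram binning are illustrative choices, not the paper's settings:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_texture_profile(patch, P=8, R=1):
        """Histogram of uniform LBP codes for a grey-level (uint8) patch,
        a typical descriptor for tissue around a cartilage boundary point."""
        codes = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                               density=True)
        return hist
    ```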

  18. Fuzzy logic controller versus classical logic controller for residential hybrid solar-wind-storage energy system

    NASA Astrophysics Data System (ADS)

    Derrouazin, A.; Aillerie, M.; Mekkakia-Maaza, N.; Charles, J. P.

    2016-07-01

    Much research on the management of diverse hybrid energy systems has been carried out, and many techniques have been proposed for robustness, savings, and environmental purposes. In this work we make a comparative study between two supervision and control techniques, fuzzy and classical logic, to manage a hybrid energy system for typical housing fed by solar and wind power, with a battery rack for storage. The system is assisted by the electric grid during energy shortfalls. A hydrogen production device is integrated into the system to recover surplus energy production from the renewable sources for household purposes, aiming at maximum exploitation of these sources over the years. The models have been implemented, and the signals generated for the electronic switch commands of both proposed techniques are presented and discussed in this paper.

  19. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm, a variation of the one-class nu Support Vector Machines (SVMs) learning algorithm, that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
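
    The proposed sparse variant is not available in common libraries, but the baseline it is compared against is; the following scikit-learn sketch shows a standard one-class nu-SVM and the support-vector count whose redundancy the paper targets (all data are synthetic):

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))            # nominal data only
    oc = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X_train)

    X_test = np.vstack([rng.normal(size=(50, 4)),
                        rng.normal(loc=4.0, size=(5, 4))])   # 5 anomalies
    pred = oc.predict(X_test)                      # +1 = nominal, -1 = anomaly
    print("support vectors:", len(oc.support_))    # the sparsity the paper reduces
    ```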

  20. Development of neural network techniques for finger-vein pattern classification

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Da; Liu, Chiung-Tsiung; Tsai, Yi-Jang; Liu, Jun-Ching; Chang, Ya-Wen

    2010-02-01

    A personal identification system using finger-vein patterns and neural network techniques is proposed in the present study. In the proposed system, the finger-vein patterns are captured by a device that transmits near-infrared light through the finger and records the patterns for signal analysis and classification. The biometric verification system consists of feature extraction using principal component analysis and pattern classification using both a back-propagation network and an adaptive neuro-fuzzy inference system. Finger-vein features are first extracted by the principal component analysis method, which reduces the computational burden and removes noise residing in the discarded dimensions. The features are then used for pattern classification and identification. To verify the effectiveness of the proposed adaptive neuro-fuzzy inference system for pattern classification, it is compared with the back-propagation network. The experimental results indicate that the proposed system using the adaptive neuro-fuzzy inference system demonstrates better performance than the back-propagation network for personal identification using finger-vein patterns.

  1. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBANs) in an energy-efficient fashion. In WBANs, energy is consumed by three operations: sensing (sampling), processing, and transmission. Previous studies addressed only the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that its reconstruction accuracy is significantly better than that of state-of-the-art techniques, and we achieve this while saving sensing, processing, and transmission energy. A simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS-based techniques.
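
    The paper derives its own matrix completion solver; as a generic baseline for the same recovery problem, singular value thresholding (SVT) reconstructs a low-rank matrix from randomly observed entries. A minimal NumPy sketch (tau, step size, and iteration count are illustrative):

    ```python
    import numpy as np

    def svt_complete(M, mask, tau=5.0, n_iter=200, step=1.0):
        """Singular-value-thresholding sketch for low-rank matrix completion.

        M: matrix with observed entries filled in; mask: True where observed.
        """
        X = np.zeros_like(M)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X + step * mask * (M - X),
                                     full_matrices=False)
            s = np.maximum(s - tau, 0.0)      # soft-threshold singular values
            X = (U * s) @ Vt
        return X
    ```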

  2. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources; it is therefore necessary to employ a preprocessing phase that can moderate the execution time and computational resources. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images among cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm, which is inspired by the social behaviour of honeybees in their search for good food sources. In the proposed method, the classifier performance and the dimension of the selected feature vector are evaluated using wrapper-based methods. The experiments are performed using two large datasets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.

  3. Fabrication of a highly sensitive penicillin sensor based on charge transfer techniques.

    PubMed

    Lee, Seung-Ro; Rahman, M M; Sawada, Kazuaki; Ishida, Makoto

    2009-03-15

    A highly sensitive penicillin biosensor based on a charge-transfer technique (CTTPS) has been fabricated and demonstrated in this paper. The CTTPS comprises a charge accumulation technique for sensing penicilloic acid and H(+) ions. With the proposed CTTPS, it is possible to amplify the sensing signals without an external amplifier by using charge accumulation cycles. The fabricated CTTPS exhibits excellent performance for penicillin detection: high sensitivity (47.852 mV/mM), high signal-to-noise ratio (SNR), large span (1445 mV), wide linear range (0-25 mM), fast response time (<3 s), and very good reproducibility. A detection limit as low as about 0.01 mM was observed for the proposed sensor. Under optimum conditions, the proposed CTTPS outstripped the performance of the widely used ISFET penicillin sensor, exhibiting almost eight times greater sensitivity than the ISFET (6.56 mV/mM). The sensor system was applied to the measurement of penicillin concentration in penicillin fermentation broth.

  4. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection, and other fields of research. However, in aerospace engineering, physical sensors are limited in the operational conditions of spacecraft due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the prediction model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The strength of the novel technique is further demonstrated in a simply supported beam experiment by comparison with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural responses with high accuracy.
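
    A PyTorch sketch of the stated four-layer topology (two convolutional layers, one fully connected layer, an output layer); channel counts, kernel sizes, and the window length are hypothetical, not taken from the paper:

    ```python
    import torch
    import torch.nn as nn

    class VirtualSensorCNN(nn.Module):
        """Predict the response window of an unmeasured channel from the
        responses of the measured channels over the same time window."""
        def __init__(self, n_channels=4, window=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * window, 128), nn.ReLU(),
                nn.Linear(128, window),          # predicted response window
            )
        def forward(self, x):
            return self.fc(self.features(x))

    y = VirtualSensorCNN()(torch.randn(8, 4, 64))   # batch of 8 windows -> (8, 64)
    ```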

  5. Behavior Knowledge Space-Based Fusion for Copy-Move Forgery Detection.

    PubMed

    Ferreira, Anselmo; Felipussi, Siovani C; Alfaro, Carlos; Fonseca, Pablo; Vargas-Munoz, John E; Dos Santos, Jefersson A; Rocha, Anderson

    2016-07-20

    The detection of copy-move image tampering is of paramount importance nowadays, mainly due to its potential use for misleading the opinion-forming process of the general public. In this paper, we go beyond traditional forgery detectors and aim at combining different properties of copy-move detection approaches by modeling the problem on a multiscale behavior knowledge space, which encodes the output combinations of different techniques as a priori probabilities considering multiple scales of the training data. Afterwards, the missing entries of the conditional probabilities are properly estimated through generative models applied to the existing training data. Finally, we propose different techniques that exploit the multi-directionality of the data to generate the final detection map in a machine learning decision-making fashion. Experimental results on complex datasets, comparing the proposed techniques with a gamut of copy-move detection approaches and other fusion methodologies in the literature, show the effectiveness of the proposed method and its suitability for real-world applications.

  6. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection, and other fields of research. However, in aerospace engineering, physical sensors are limited in the operational conditions of spacecraft due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the prediction model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The strength of the novel technique is further demonstrated in a simply supported beam experiment by comparison with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural responses with high accuracy. PMID:29231868

  7. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type neuro-fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting in animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle, so the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated on modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  8. Global image registration using a symmetric block-matching approach

    PubMed Central

    Modat, Marc; Cash, David M.; Daga, Pankaj; Winston, Gavin P.; Duncan, John S.; Ourselin, Sébastien

    2014-01-01

    Most medical image registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package. PMID:26158035

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang's mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang's attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains in question, with one vendor's camera software producing artifacts. The objective of this work is to ensure that Chang's attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom, filled with 20 mCi of diluted Tc-99m, was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual-head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in 180° and 2 scans in 360° orbit acquisition modes. Thirty-two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang's attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique in which the photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor; the inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360°-acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis showed improved uniformity with the proposed algorithm compared to the camera software.
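
    The modified correction described here amounts to averaging the attenuation factor over many ray paths per pixel and inverting it (the classic Chang first-order scheme). A minimal NumPy sketch of that idea; the pixel-by-pixel ray marching, step size, and units are illustrative assumptions, not the record's implementation:

    ```python
    import numpy as np

    def chang_correction_factor(mu_map, n_angles=64):
        """For each pixel, average exp(-sum of mu along the ray) over
        projection angles and return the inverse as the correction factor.

        mu_map: 2D attenuation-coefficient map in units of 1/pixel.
        """
        H, W = mu_map.shape
        corr = np.ones((H, W))
        angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
        for i in range(H):
            for j in range(W):
                att = 0.0
                for th in angles:
                    # march from the pixel to the boundary along direction th
                    y, x, path = float(i), float(j), 0.0
                    while 0 <= y < H and 0 <= x < W:
                        path += mu_map[int(y), int(x)]
                        y += np.sin(th)
                        x += np.cos(th)
                    att += np.exp(-path)
                corr[i, j] = n_angles / att      # inverse of mean attenuation
        return corr
    ```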

  10. Feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-08-03

    The feature selection (FS) process is essential in the medical area as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, which is the unique characteristic of survival analysis. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) are preferred but not commonly used due to censoring, and techniques previously proposed to adapt MLT for FS on survival data cannot be used with high levels of censoring. The researchers' previous publications proposed a technique to deal with the high level of censoring and used existing FS techniques to reduce the dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. The proposed FS technique, based on an artificial neural network (ANN) MLT, is applied to highly censored Endovascular Aortic Repair (EVAR) survival data. The EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach uses a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers in the United Kingdom, allowing it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups: its concordance index and estimated AUC are better than those of Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort physicians spend collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR; this predictive model can help clinicians decide patients' future observation plans.

  11. Hyperspectral imaging using the single-pixel Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Jin, Senlin; Hui, Wangwei; Wang, Yunlong; Huang, Kaicheng; Shi, Qiushuai; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo

    2017-03-01

    Hyperspectral imaging technology is playing an increasingly important role in the fields of food analysis, medicine, and biotechnology. To improve the speed of operation and increase the light throughput in a compact equipment structure, a Fourier transform hyperspectral imaging system based on a single-pixel technique is proposed in this study. Compared with current imaging spectrometry approaches, the proposed system has a wider spectral range (400-1100 nm), better spectral resolution (1 nm), and requires less measurement data (a sampling rate of 6.25%). The performance of the system was verified by applying it to the non-destructive testing of potatoes.

  12. A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events

    NASA Astrophysics Data System (ADS)

    Taniguchi, Kenji

    2018-04-01

    To investigate future variations in high-impact weather events, numerous samples are required; for detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper, in which new ensemble members are generated from one basic state vector and two perturbation vectors obtained from lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed, and the technique was also applied to a global warming study of a typhoon event. The ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments, and the frequencies of the maximum total and hourly precipitation showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment, and the global warming application revealed possible future variations. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming; the ensemble simulations also enable stochastic evaluation of differences in high-impact weather events. In addition, the impact of a spectral nudging technique was examined. The typhoon tracks were quite different between cases with and without spectral nudging, but the ranges of the tracks among ensemble members were comparable, indicating that spectral nudging does not necessarily suppress ensemble spread.

  13. High-Accuracy Ultrasound Contrast Agent Detection Method for Diagnostic Ultrasound Imaging Systems.

    PubMed

    Ito, Koichi; Noro, Kazumasa; Yanagisawa, Yukari; Sakamoto, Maya; Mori, Shiro; Shiga, Kiyoto; Kodama, Tetsuya; Aoki, Takafumi

    2015-12-01

    An accurate method for detecting contrast agents using diagnostic ultrasound imaging systems is proposed. Contrast agents, such as microbubbles, passing through a blood vessel during ultrasound imaging are detected as blinking signals along the temporal axis, because their intensity values fluctuate continuously. Ultrasound contrast agents are therefore detected by evaluating the intensity variation of each pixel along the temporal axis. Conventional methods are based on simple subtraction of ultrasound images; even if the subject moves only slightly, such a detection method introduces significant error. In contrast, the proposed technique employs spatiotemporal analysis of the pixel intensity variation over several frames. Experiments visualizing blood vessels in the mouse tail showed that the proposed method performs efficiently compared with conventional approaches. We also report that the new technique is useful for observing temporal changes in microvessel density in subiliac lymph nodes containing tumors; the results are compared with those of contrast-enhanced computed tomography. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
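
    A minimal NumPy sketch of the temporal-variation idea: score each pixel by its intensity variance over a short sliding window of frames. This is a simplified stand-in for the paper's spatiotemporal analysis, and the window length is an illustrative choice:

    ```python
    import numpy as np

    def blinking_map(frames, win=8):
        """Highlight blinking microbubble signals as temporal intensity variance.

        frames: array of shape (T, H, W) of consecutive B-mode frames.
        """
        T = frames.shape[0]
        var = np.zeros(frames.shape[1:])
        for t in range(0, T - win + 1):
            var = np.maximum(var, frames[t:t + win].var(axis=0))
        return var      # high values mark contrast-agent candidates
    ```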

  14. Optimized MLAA for quantitative non-TOF PET/MR of the brain

    NASA Astrophysics Data System (ADS)

    Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan

    2016-12-01

    For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generating the attenuation map, but in PET/MR the MR images do not have a similarly simple relationship with the attenuation map, so attenuation correction in PET/MR systems is more challenging. Typically one of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR using information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. The proposed algorithm is compared to common reconstruction methods: OSEM using a CT attenuation map, considered the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients, and 1 [18F]FET patient is used. The results show that, by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves on the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or UTE attenuation maps), and the cross-talk and scale problems are limited.

  15. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown to be effective in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for the detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement in performance (AUC = 0.905 +/- 0.024) compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.

  16. Fabrication of thermal-resistant gratings for high-temperature measurements using geometric phase analysis.

    PubMed

    Zhang, Q; Liu, Z; Xie, H; Ma, K; Wu, L

    2016-12-01

    Grating fabrication techniques are crucial to the success of grating-based deformation measurement methods because the quality of the grating directly affects the measurement results. Deformation measurements at high temperatures entail heating, which may oxidize the grating, and the contrast of the grating lines may change during the heating process. Thus, the thermal-resistant capability of the grating is a point of great concern before taking measurements. This study proposes a method that combines a laser-engraving technique with particle spraying and sintering processes to fabricate thermal-resistant gratings. The grating fabrication technique is introduced and discussed in detail. A numerical simulation with geometric phase analysis (GPA) is performed for a homogeneous deformation case, and a selection scheme for the grating pitch is then suggested. The validity of the proposed technique is verified by fabricating a thermal-resistant grating on a ZrO2 specimen and measuring its thermal strain at high temperatures (up to 1300 °C). Images of the grating before and after deformation are used to obtain the thermal-strain field by GPA and to compare the results with well-established reference data. The experimental results indicate that the proposed technique is feasible and offers good prospects for further applications.

  17. Weighted image de-fogging using luminance dark prior

    NASA Astrophysics Data System (ADS)

    Kansal, Isha; Kasana, Singara Singh

    2017-10-01

    In this work, the weighted image de-fogging process based on the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission using all three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each patch, the luminance dark prior therefore operates on one-third as many pixels as the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based on a difference prior is used, which mitigates halo artefacts during transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even when there are no significant depth disruptions, so the de-fogged image looks over-smoothed and has low contrast; in some images, the weighted transmission still carries faintly visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of de-fogged images. In addition, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based on a histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques; this comparison shows that the proposed technique performs better than the existing ones.
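
    To make the two priors concrete, a minimal NumPy/SciPy sketch of both patch minima; the BGR channel order and patch size are illustrative assumptions, and the paper's weighting and Gaussian refinement are omitted:

    ```python
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """Classic dark channel prior: per-pixel minimum over all three
        channels and a patch neighbourhood (img: float HxWx3, BGR assumed)."""
        return minimum_filter(img.min(axis=2), size=patch)

    def luminance_dark(img, patch=15):
        """Luminance dark prior: the same patch minimum, computed on the
        Y (luminance) component only, so one-third as many pixels."""
        y = 0.114 * img[..., 0] + 0.587 * img[..., 1] + 0.299 * img[..., 2]
        return minimum_filter(y, size=patch)
    ```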

  18. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. The standard GSA (SGSA) utilizes the best agents without any randomization and is thus more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of SL-GSA was analyzed on six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  19. Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach

    NASA Astrophysics Data System (ADS)

    Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew

    2017-05-01

    This paper develops the Clusterwise Linear Regression (CLR) technique for the prediction of monthly rainfall. CLR is a combination of clustering and regression techniques; it is formulated as an optimization problem, and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia, using rainfall data with five input meteorological variables over the period 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with CLR using the maximum likelihood framework via the expectation-maximization algorithm, multiple linear regression, artificial neural networks, and support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations.
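
    CLR alternates between fitting one linear model per cluster and reassigning each point to the cluster whose model fits it best. A minimal NumPy sketch of that alternating scheme; the paper's incremental optimization algorithm differs, and k, the initialization, and the stopping rule here are illustrative:

    ```python
    import numpy as np

    def clusterwise_linear_regression(X, y, k=3, n_iter=50, seed=0):
        """Alternating fit/assign scheme for clusterwise linear regression."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, k, size=len(y))
        Xb = np.hstack([X, np.ones((len(y), 1))])        # add intercept column
        for _ in range(n_iter):
            coefs = []
            for j in range(k):
                m = labels == j
                if m.sum() < Xb.shape[1]:                # degenerate cluster
                    coefs.append(np.zeros(Xb.shape[1]))
                    continue
                beta, *_ = np.linalg.lstsq(Xb[m], y[m], rcond=None)
                coefs.append(beta)
            coefs = np.array(coefs)
            resid = (Xb @ coefs.T - y[:, None]) ** 2     # squared residuals
            new_labels = resid.argmin(axis=1)            # reassign points
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return coefs, labels
    ```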

  20. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature is extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captures time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker; hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficients (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database consisting of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate, compared to 96.7% for MFCC, on the LDC subset.

  1. ECG-derived respiration based on iterated Hilbert transform and Hilbert vibration decomposition.

    PubMed

    Sharma, Hemant; Sharma, K K

    2018-06-01

    Monitoring of respiration using the electrocardiogram (ECG) is desirable for the simultaneous study of cardiac activity and respiration, in terms of comfort, mobility, and the cost of the healthcare system. This paper proposes a new approach for deriving respiration from single-lead ECG based on the iterated Hilbert transform (IHT) and the Hilbert vibration decomposition (HVD). The ECG signal is first decomposed into multicomponent sinusoidal signals using the IHT technique; the lower-order amplitude components obtained from the IHT are then filtered using the HVD to extract the respiration information. Experiments are performed on the Fantasia and Apnea-ECG datasets. The performance of the proposed ECG-derived respiration (EDR) approach is compared with existing techniques, including principal component analysis (PCA), R-peak amplitudes (RPA), respiratory sinus arrhythmia (RSA), slopes of the QRS complex, and the R-wave angle. The proposed technique showed the highest median correlation values (first and third quartiles) for the Fantasia and Apnea-ECG datasets: 0.699 (0.55, 0.82) and 0.57 (0.40, 0.73), respectively. The proposed algorithm also provided the lowest mean absolute error and average percentage error, computed from the EDR and reference (recorded) respiration signals, for the Fantasia and Apnea-ECG datasets: 1.27 and 9.3%, and 1.35 and 10.2%, respectively. In experiments over different age groups in the Fantasia dataset, the proposed algorithm provided effective results in the younger population and outperformed the existing techniques for elderly subjects. The proposed EDR technique has advantages over existing techniques in terms of better agreement in respiratory rates; in particular, it removes the extra step of detecting fiducial points in the ECG for estimating respiration, which makes the process effective and less complex. These performance results, obtained from two different datasets, validate that the proposed approach can be used for monitoring respiration using single-lead ECG.

  2. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of the SVM. However, the use of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, random values are normally used for the acceleration coefficients, and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate the proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). Performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we first perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing, during which the SVM parameters are optimized with the AAPSO technique; in AAPSO, the acceleration coefficients are computed using the particle fitness values. The SVM parameters optimized by AAPSO perform efficiently for both face and iris recognition. A comparative analysis between the proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  3. A secure 3-way routing protocols for intermittently connected mobile ad hoc networks.

    PubMed

    Sekaran, Ramesh; Parasuraman, Ganesh Kumar

    2014-01-01

    A mobile ad hoc network may be partially connected or disconnected in nature, and these forms of networks are termed intermittently connected mobile ad hoc networks (ICMANET). Routing in such disconnected networks is commonly an arduous task, and many routing protocols have been proposed for ICMANET over the decades. The existing routing techniques for ICMANET include flooding, epidemic, probabilistic, copy case, and spray and wait, which achieve effective routing with minimum latency, higher delivery ratio, lower overhead, and so forth. Though these techniques generate effective results, in this paper we propose novel routing algorithms grounded in agent and cryptographic techniques, namely location dissemination service (LoDiS) routing with agent AES, A-LoDiS with agent AES routing, and B-LoDiS with agent AES routing, ensuring optimal results with respect to various network routing parameters. Along with efficient routing, the algorithm ensures a high degree of security; the security level is tested with respect to the possibility of malicious nodes in the network. This paper also provides comparative results of the proposed algorithms for secure routing in ICMANET.

  4. Defogging of road images using gain coefficient-based trilateral filter

    NASA Astrophysics Data System (ADS)

    Singh, Dilbag; Kumar, Vijay

    2018-01-01

    Poor weather conditions are responsible for a large share of road accidents year in and year out. Poor weather conditions, such as fog, degrade the visibility of objects, making it difficult for drivers to identify vehicles in a foggy environment. Dark channel prior (DCP)-based defogging techniques have been found to be an efficient way to remove fog from road images; however, they produce poor results when image objects are inherently similar to the airlight and no shadow is cast on them. To eliminate this problem, a modified restoration-model-based DCP is developed to remove fog from road images. The transmission map is also refined by developing a gain coefficient-based trilateral filter. The proposed technique thus has the ability to remove fog from road images effectively. The proposed technique is compared with seven well-known defogging techniques on two benchmark foggy image datasets and five real-time foggy images. The experimental results demonstrate that the proposed approach is able to remove different types of fog from roadside images and significantly improve image visibility, and the restored images show little or no artifacts.

  5. A Secure 3-Way Routing Protocols for Intermittently Connected Mobile Ad Hoc Networks

    PubMed Central

    Parasuraman, Ganesh Kumar

    2014-01-01

    A mobile ad hoc network may be partially connected or disconnected in nature, and these forms of networks are termed intermittently connected mobile ad hoc networks (ICMANET). Routing in such disconnected networks is commonly an arduous task, and many routing protocols have been proposed for ICMANET over the decades. The existing routing techniques for ICMANET include flooding, epidemic, probabilistic, copy case, and spray and wait, which achieve effective routing with minimum latency, higher delivery ratio, lower overhead, and so forth. Though these techniques generate effective results, in this paper we propose novel routing algorithms grounded in agent and cryptographic techniques, namely location dissemination service (LoDiS) routing with agent AES, A-LoDiS with agent AES routing, and B-LoDiS with agent AES routing, ensuring optimal results with respect to various network routing parameters. Along with efficient routing, the algorithm ensures a high degree of security; the security level is tested with respect to the possibility of malicious nodes in the network. This paper also provides comparative results of the proposed algorithms for secure routing in ICMANET. PMID:25136697

  6. Discrete classification technique applied to TV advertisements liking recognition system based on low-cost EEG headsets.

    PubMed

    Soria Morillo, Luis M; Alvarez-Garcia, Juan A; Gonzalez-Abril, Luis; Ortega Ramírez, Juan A

    2016-07-15

    In this paper a new approach is applied to the area of marketing research. The aim is to recognize how brain activity responds during the viewing of short video advertisements using discrete classification techniques. By means of low-cost electroencephalography (EEG) devices, the activation levels of several brain regions were studied while the ads were shown to users. One may ask how useful neuroscience knowledge is in marketing, what neuroscience can offer the marketing sector, and why this approach can improve accuracy and final user acceptance compared to other work. Using discrete techniques over the EEG frequency bands of a generated dataset, C4.5, an ANN, and a new recognition system based on Ameva, a discretization algorithm, are applied to obtain the score given by subjects to each TV ad. The proposed technique reaches more than 75% accuracy, which is an excellent result considering the type of EEG sensors used in this work. Furthermore, the time consumption of the proposed algorithm is reduced by up to 30% compared to the other techniques presented in this paper. This brings about a battery-lifetime improvement on the devices where the algorithm runs, extending the experience in the ubiquitous context where the new approach has been tested.

  7. Simultaneous F0-F1 modifications of Arabic for the improvement of natural-sounding

    NASA Astrophysics Data System (ADS)

    Ykhlef, F.; Bensebti, M.

    2013-03-01

    Pitch (F0) modification is one of the most important problems in the area of speech synthesis. Several techniques have been developed in the literature to achieve this goal. The main restrictions of these techniques lie in the modification range and in the quality, intelligibility, and naturalness of the synthesised speech. The control of formants in a spoken language can significantly improve the naturalness of the synthesised speech, and this improvement depends mainly on the control of the first formant (F1). Inspired by this observation, this article proposes a new approach that modifies both F0 and F1 of Arabic voiced sounds in order to improve the naturalness of the pitch-shifted speech. The developed strategy takes a parallel processing approach, in which the analysis segments are decomposed into sub-bands in the wavelet domain, modified in the desired sub-band by using a resampling technique, and reconstructed without affecting the remaining sub-bands. Pitch marking and voicing detection are performed in the frequency decomposition step based on the comparison of the multi-level approximation and detail signals. The performance of the proposed technique is evaluated by listening tests and compared to the pitch synchronous overlap and add (PSOLA) technique at the third approximation level. Experimental results have shown that the manipulation of F0 in conjunction with F1 in the wavelet domain yields more natural-sounding synthesised speech than the classical pitch modification technique. This improvement was particularly evident for large pitch modifications.

  8. Weighted spline based integration for reconstruction of freeform wavefront.

    PubMed

    Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra

    2018-02-10

    In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise due to the machining process, which introduces reconstruction error. We have proposed a weighted cubic-spline-based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted to a piecewise polynomial, with the fitted coefficients determined by a smoothing cubic spline fitting method; the smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show improved results using the proposed technique as compared to the existing cubic-spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture-stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
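
    A one-dimensional analogue of the fit-then-integrate principle can be sketched with SciPy's weighted smoothing splines (an illustration only, not the authors' WCSLI; the weights and smoothing parameter are invented):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        x = np.linspace(0.0, 1.0, 200)
        slope = np.cos(2 * np.pi * x)                     # slope of a test wavefront
        noisy = slope + 0.05 * np.random.randn(x.size)    # machining-like noise

        w = np.ones_like(x)                               # per-point weights
        w[:10] = w[-10:] = 0.3                            # down-weight noisy edges

        spl = UnivariateSpline(x, noisy, w=w, s=0.5)      # weighted smoothing fit
        wavefront = spl.antiderivative()(x)               # integrate the fitted slope
        reference = np.sin(2 * np.pi * x) / (2 * np.pi)   # analytic integral, for checking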

  9. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    PubMed

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time-series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of the Autoregressive Integrated Moving Average (ARIMA) time-series forecasting method to learn a BPNN-based forecasting model. The proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that the proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words.
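
    A minimal sketch of such a hybrid, assuming statsmodels and scikit-learn are available (the sales series and popularity index below are synthetic placeholders, not the paper's data):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        sales = 100 + np.cumsum(rng.normal(0, 2, 60))     # monthly sales (synthetic)
        popularity = rng.uniform(0, 1, 60)                # title search-index proxy

        # Step 1: ARIMA supplies a baseline in-sample prediction as one feature.
        arima = ARIMA(sales, order=(1, 1, 1)).fit()
        baseline = arima.predict(start=1, end=len(sales) - 1)

        # Step 2: a small backpropagation network combines it with title popularity.
        X = np.column_stack([baseline, popularity[1:]])
        bpnn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        bpnn.fit(X, sales[1:])
        hybrid_forecast = bpnn.predict(X)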

  10. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles

    PubMed Central

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time-series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of the Autoregressive Integrated Moving Average (ARIMA) time-series forecasting method to learn a BPNN-based forecasting model. The proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that the proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words. PMID:27313605

  11. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    PubMed Central

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables; however, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results than conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures, which yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and selected a more stringent set of factors. According to the proposed procedures, the individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents. Conclusions The newly proposed procedures improve the identification of significant variables and provide new insight into epidemiological association analysis. PMID:26214802
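
    The bootstrap-ranking idea can be illustrated with scikit-learn: fit a LASSO on many bootstrap resamples and rank variables by how often their coefficients survive (the penalty and the 0.8 cutoff are illustrative assumptions, not the paper's settings):

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.utils import resample

        rng = np.random.default_rng(0)
        n, p = 200, 30
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:3] = [1.5, -2.0, 1.0]                       # three true predictors
        y = X @ beta + rng.normal(size=n)

        B, counts = 200, np.zeros(p)
        for b in range(B):
            Xb, yb = resample(X, y, random_state=b)       # bootstrap resample
            counts += Lasso(alpha=0.1).fit(Xb, yb).coef_ != 0

        frequency = counts / B                            # selection frequency per variable
        stable = np.where(frequency > 0.8)[0]             # retained variables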

  12. Dual energy approach for cone beam artifacts correction

    NASA Astrophysics Data System (ADS)

    Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk

    2017-03-01

    Cone beam computed tomography systems generate 3D volumetric images, which provide additional morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed, which assumes that the cone beam artifacts are mainly caused by high-density materials and estimates the error images (i.e., cone beam artifact images) produced by those materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.

  13. Concentric Rings K-Space Trajectory for Hyperpolarized 13C MR Spectroscopic Imaging

    PubMed Central

    Jiang, Wenwen; Lustig, Michael; Larson, Peder E.Z.

    2014-01-01

    Purpose To develop a robust and rapid imaging technique for hyperpolarized 13C MR Spectroscopic Imaging (MRSI) and investigate its performance. Methods A concentric rings readout trajectory with constant angular velocity is proposed for hyperpolarized 13C spectroscopic imaging and its properties are analyzed. Quantitative analyses of design tradeoffs are presented for several imaging scenarios. The first applications of concentric rings to 13C phantoms and to in vivo animal hyperpolarized 13C MRSI studies were performed to demonstrate the feasibility of the proposed method. Finally, a parallel imaging accelerated concentric rings study is presented. Results The concentric rings MRSI trajectory offers reduced acquisition time compared to echo-planar spectroscopic imaging (EPSI). It provides sufficient spectral bandwidth with relatively high SNR efficiency compared to EPSI and spiral techniques. Phantom and in vivo animal studies showed good image quality with half the scan time and reduced pulsatile flow artifacts compared to EPSI. Parallel imaging accelerated concentric rings showed advantages over Cartesian sampling in g-factor simulations and demonstrated aliasing-free image quality in a hyperpolarized 13C in vivo study. Conclusion The concentric rings trajectory is a robust and rapid imaging technique that fits very well with the speed, bandwidth, and resolution requirements of hyperpolarized 13C MRSI. PMID:25533653
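
    The constant-angular-velocity rings themselves are simple to generate; a sketch with an illustrative ring count and readout length (not the sequence parameters used in the study):

        import numpy as np

        n_rings, n_samples, kmax = 32, 256, 0.5           # illustrative values
        radii = (np.arange(n_rings) + 0.5) / n_rings * kmax
        theta = 2 * np.pi * np.arange(n_samples) / n_samples   # constant angular rate

        # Each readout traces one full circle in k-space at a fixed radius;
        # repeating the rings at successive delays encodes the spectral dimension.
        kx = radii[:, None] * np.cos(theta)[None, :]
        ky = radii[:, None] * np.sin(theta)[None, :]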

  14. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    NASA Astrophysics Data System (ADS)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

    Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, widely applied machine learning techniques and feature extraction methods are benchmarked. Second, three new features based on the clinical detection model are proposed, offering superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine with our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using the proposed mean and median features, which are extracted at least a factor of 2.5 faster than alternative features with comparable performance.

  15. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    PubMed Central

    Li, David Day-Uei; Ameer-Beg, Simon; Arlt, Jochen; Tyndall, David; Walker, Richard; Matthews, Daniel R.; Visitkul, Viput; Richardson, Justin; Henderson, Robert K.

    2012-01-01

    We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on the data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast. PMID:22778606

  16. Fuzzy logic controller versus classical logic controller for residential hybrid solar-wind-storage energy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derrouazin, A., E-mail: derrsid@gmail.com; Université de Lorraine, LMOPS, EA 4423, 57070 Metz; CentraleSupélec, LMOPS, 57070 Metz

    Much research has addressed the management of diverse hybrid energy systems, and many techniques have been proposed for robustness, savings, and environmental purposes. In this work we present a comparative study of two supervision and control techniques, fuzzy and classical logic, to manage a hybrid energy system for typical housing fed by solar and wind power, with a rack of batteries for storage. The system is assisted by the electric grid during periods of energy shortfall. A hydrogen production device is integrated into the system to recover surplus energy production from the renewable sources for household purposes, aiming at maximum exploitation of these sources over the years. The models have been implemented, and the command signals generated for the electronic switches by both proposed techniques are presented and discussed in this paper.

  17. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally describe the visual content present in an image, characterized by, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimal LBP variant for the general definition of image feature vectors. The chosen LBP variant is subsequently used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical, widely used indexing technique.
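
    For illustration, a rotation-invariant histogram feature vector of the kind used in LBP-based CBIR can be computed with scikit-image (the uniform variant and the 8-point/radius-1 setting are conventional defaults, not necessarily the variant the study selected):

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_feature_vector(gray, points=8, radius=1.0):
            # 'uniform' LBP yields points + 2 distinct codes.
            codes = local_binary_pattern(gray, points, radius, method="uniform")
            hist, _ = np.histogram(codes, bins=points + 2,
                                   range=(0, points + 2), density=True)
            return hist

        frame = np.random.rand(128, 128)      # stand-in for an ultrasound image
        fv = lbp_feature_vector(frame)        # 10-bin normalized histogram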

  18. [Percutaneous lung thermo-ablation].

    PubMed

    Palussière, Jean; Catena, Vittorio; Gaubert, Jean-Yves; Buy, Xavier; de Baere, Thierry

    2017-05-01

    Percutaneous lung thermo-ablation has steadily been developed over the past 15 years. Its main indications are early stage non-small cell lung carcinoma (NSCLC) in non-surgical patients and slowly evolving localized metastatic disease, either spontaneous or following a general treatment. Radiofrequency, the most thoroughly evaluated technique, offers a local control rate of about 80-90% for tumors <3 cm in diameter. With excellent tolerance and very few complications, radiofrequency may be proposed for patients with a chronic disease. Other ablation techniques under investigation, such as microwaves and cryotherapy, could overcome the limits of radiofrequency. Furthermore, stereotactic radiotherapy, proposed for the same indications, is efficient. Comparative studies are warranted to differentiate these techniques in terms of efficacy, tolerance, and cost-effectiveness. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.

  20. Definition of Exclusion Zones Using Seismic Data

    NASA Astrophysics Data System (ADS)

    Bartal, Y.; Villagran, M.; Ben Horin, Y.; Leonard, G.; Joswig, M.

    In verifying compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), there is a motivation to be effective, efficient and economical and to prevent abuse of the right to conduct an On-site Inspection (OSI) in the territory of a challenged State Party. In particular, it is in the interest of a State Party to avoid irrelevant search in specific areas. In this study we propose several techniques to determine `exclusion zones', which are defined as areas where an event could not have possibly occurred. All techniques are based on simple ideas of arrival time differences between seismic stations and thus are less prone to modeling errors compared to standard event location methods. The techniques proposed are: angular sector exclusion based on a tripartite micro array, half-space exclusion based on a station pair, and closed area exclusion based on circumferential networks.
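
    The station-pair half-space exclusion reduces to elementary geometry: if station A records the arrival before station B, the event cannot lie on B's side of the perpendicular bisector between the two stations. A grid-based sketch (coordinates and times are invented):

        import numpy as np

        A = np.array([0.0, 0.0])      # station A position (km), hypothetical
        B = np.array([40.0, 10.0])    # station B position (km), hypothetical
        tA, tB = 12.0, 15.5           # arrival times (s)

        gx, gy = np.meshgrid(np.linspace(-50, 100, 300), np.linspace(-50, 100, 300))
        dA = np.hypot(gx - A[0], gy - A[1])
        dB = np.hypot(gx - B[0], gy - B[1])

        # Earlier arrival at A implies the source is closer to A, so the
        # half-space at least as close to B is excluded (and vice versa).
        excluded = (dB <= dA) if tA < tB else (dA <= dB)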

  1. A Novel Approach with Time-Splitting Spectral Technique for the Coupled Schrödinger-Boussinesq Equations Involving Riesz Fractional Derivative

    NASA Astrophysics Data System (ADS)

    Saha Ray, S.

    2017-09-01

    In the present paper the Riesz fractional coupled Schrödinger-Boussinesq (S-B) equations have been solved by the time-splitting Fourier spectral (TSFS) method. The proposed technique is utilized for discretizing the Schrödinger-like equation, and a pseudospectral discretization is employed for the Boussinesq-like equation. In addition, an implicit finite difference approach is proposed in order to compare its results with the solutions obtained from the time-splitting technique. Furthermore, the time-splitting method is proved to be unconditionally stable. The error norms along with the graphical solutions are also presented. Supported by NBHM, Mumbai, under Department of Atomic Energy, Government of India vide Grant No. 2/48(7)/2015/NBHM (R.P.)/R&D II/11403
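
    Since the Riesz fractional derivative is diagonal in Fourier space with symbol -|k|^alpha, the time-splitting step is easy to sketch. Below is a generic Strang-splitting loop for a toy fractional nonlinear Schrödinger equation i u_t = (-Δ)^(α/2) u + |u|² u, as an illustration of the TSFS mechanics rather than the full coupled S-B system:

        import numpy as np

        N, L_dom = 256, 40.0
        x = np.linspace(-L_dom / 2, L_dom / 2, N, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L_dom / N)
        alpha, dt = 1.5, 1e-3

        u = np.exp(-x ** 2) * np.exp(1j * x)              # illustrative initial data
        half = np.exp(-1j * np.abs(k) ** alpha * dt / 2)  # fractional symbol, half step

        for _ in range(1000):
            u = np.fft.ifft(half * np.fft.fft(u))         # half-step dispersion
            u *= np.exp(-1j * np.abs(u) ** 2 * dt)        # full-step nonlinearity (exact)
            u = np.fft.ifft(half * np.fft.fft(u))         # half-step dispersion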

  2. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly required to reduce the diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant Basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using quadratic support vector machine (Q-SVM). The proposed system has achieved outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as with different extracted features. Basal Cell Carcinoma is effectively classified using Q-SVM with the proposed combined features.
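
    A quadratic SVM in this sense corresponds to a degree-2 polynomial kernel; with scikit-learn, and placeholder matrices standing in for the combined BEMD/GLDM features:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 24))   # stand-in for BEMD + GLDM feature vectors
        y = rng.integers(0, 2, 120)      # 0 = benign nevus, 1 = basal cell carcinoma

        qsvm = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0)
        scores = cross_val_score(qsvm, X, y, cv=5)        # per-fold accuracy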

  3. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. Initially, an image is divided into a number of blocks, and for each block the phase component of the Fourier transform is computed. The phase component of each block reflects the gray-level variation within the block but retains a large correlation among blocks. Hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and seed points are selected for segmentation. For each seed point, a binary segmentation of the complete MRI is performed, so that all seed points together yield an equal number of binary images. A parcel-based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm or brain tumor detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques using six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the change in power density to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined for image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.

  5. Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.

    PubMed

    Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo

    2015-12-01

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted to the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate human's neuromuscular and visual responses in cases where the classic method fails.

  6. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM with a radial basis function kernel is used for classification of seizure and non-seizure EEG signals. All experiments were carried out on the benchmark epilepsy EEG dataset, which consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification were conducted, and classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches were compared with the results of several existing techniques from the literature to establish the claim.

  7. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques that enable extraction of embedded data by mobile phones. This requires embedding data in frequency ranges where the auditory response is prominent, so data embedding causes considerable audible noise. Previously we proposed applying a two-channel stereo play-back feature, in which the noise generated by a data-embedded left-channel signal is reduced by the right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, removing the noise completely by inducing an auditory stream segregation phenomenon in listeners. This newly proposed method makes a separate noise-reducing right-channel signal unnecessary and supports monaural play-back operation. Moreover, we propose a wide-band embedding method that causes dual auditory stream segregation phenomena, enabling data embedding across the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of the embedded signals becomes smaller. In this paper we present an overview of our newly proposed method and experimental results compared with those of the previously proposed method.

  8. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
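
    The additive-homomorphic building block can be illustrated with the python-paillier library (an assumed stand-in; the authors built their own protocol on such a cryptosystem). Here a server accumulates an encrypted fingerprint-overlap count without ever seeing the query bits:

        from phe import paillier                    # python-paillier, assumed installed

        pub, priv = paillier.generate_paillier_keypair(n_length=1024)

        query_bits = [1, 0, 1, 1, 0, 1]             # client's compound fingerprint
        db_bits    = [1, 1, 1, 0, 0, 1]             # one database fingerprint

        enc_query = [pub.encrypt(b) for b in query_bits]   # client side

        # Server side: only additions on ciphertexts are needed, which
        # Paillier supports; the query itself stays encrypted throughout.
        enc_overlap = pub.encrypt(0)
        for c, b in zip(enc_query, db_bits):
            if b:
                enc_overlap = enc_overlap + c

        overlap = priv.decrypt(enc_overlap)         # client side: 3 shared bits here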

  9. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  10. Reinforcing the role of the conventional C-arm - a novel method for simplified distal interlocking

    PubMed Central

    2012-01-01

    Background The common practice for insertion of distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming and associated with considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced utilizing information from within conventional radiographic images to help accurately guide the surgeon to place the interlocking bolt into the interlocking hole. The newly developed technique was compared to the conventional freehand technique in an operating room (OR)-like setting on human cadaveric lower legs in terms of operating time and radiation exposure. Methods The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh-frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing both the newly proposed technique and the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images and radiation time were recorded and statistically compared between interlocking techniques using non-parametric tests. Results A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4, mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p < 0.001). Total radiation time (all four screws) was 55% lower for the guided freehand technique compared to conventional freehand (p = 0.001). Operating time per screw (from first shot to screw tightened) was on average 22% lower with guided freehand (p = 0.018). Conclusions In an experimental setting, the newly developed guided freehand technique for distal interlocking has proven to markedly reduce radiation exposure compared to the conventional freehand technique. The method utilizes established clinical workflows and does not require cost-intensive add-on devices or extensive training. The underlying principle carries potential to assist implant positioning in numerous other applications within orthopedics and trauma, from screw insertion to placement of plates, nails or prostheses. PMID:22276698

  11. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distributions (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested on actual extinction measurements of standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time. The superiority of the ABC and PS hybridization strategy in reaching a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSDs.

  12. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.

  13. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and their performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to ultrasound images and comparatively analyzed against the original images without the algorithm. Applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not give the most significant noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions showed the best denoising performance on the ultrasound images. The denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
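
    Wavelet-domain fusion of this kind can be sketched with PyWavelets: average the approximation coefficients of the two inputs and keep the larger-magnitude detail coefficients (the fusion rule and wavelet are common defaults, not necessarily the study's choices):

        import numpy as np
        import pywt

        def dwt_fuse(img1, img2, wavelet="db2", level=2):
            c1 = pywt.wavedec2(img1, wavelet, level=level)
            c2 = pywt.wavedec2(img2, wavelet, level=level)
            pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
            fused = [(c1[0] + c2[0]) / 2.0]               # average approximations
            for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
                fused.append((pick(h1, h2), pick(v1, v2), pick(d1, d2)))
            return pywt.waverec2(fused, wavelet)

        a = np.random.rand(128, 128)    # e.g. SRAD-filtered frame (stand-in)
        b = np.random.rand(128, 128)    # e.g. original frame (stand-in)
        fused = dwt_fuse(a, b)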

  14. A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems

    NASA Astrophysics Data System (ADS)

    Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron

    2017-12-01

    This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is derived by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effects of the channel and the integration period on TOA estimation are evaluated. Several well-known ED-based TOA algorithms are used for comparison with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance under the IEEE 802.15.3c channel models.

  15. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We show that our method satisfies the sufficient descent condition and converges globally under exact line search. Numerical results show that our proposed method is efficient on standard test problems compared to other existing CG methods.

  16. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed, iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels and number of pixels of the two regions under consideration. The algorithm converges when all pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions compared to previously proposed classification techniques.

  17. A comparative study of controlled random search algorithms with application to inverse aerofoil design

    NASA Astrophysics Data System (ADS)

    Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.

    2018-06-01

    This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.

  18. A proposed model for economic evaluations of major depressive disorder.

    PubMed

    Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi

    2012-08-01

    In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling techniques and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Given its greater flexibility in handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of such studies.

  19. Secure positioning technique based on encrypted visible light map for smart indoor service

    NASA Astrophysics Data System (ADS)

    Lee, Yong Up; Jung, Gillyoung

    2018-03-01

    Indoor visible light (VL) positioning systems for smart indoor services are negatively affected by both cochannel interference from adjacent light sources and VL reception position irregularity in the three-dimensional (3-D) VL channel. A secure positioning methodology based on a two-dimensional (2-D) encrypted VL map is proposed, implemented in prototypes of the specific positioning system, and analyzed based on performance tests. The proposed positioning technique enhances the positioning performance by more than 21.7% compared to the conventional method in real VL positioning tests. Further, the pseudonoise code is found to be the optimal encryption key for secure VL positioning for this smart indoor service.

  20. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize electroencephalogram (EEG) data with high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both measurement noise and transfer matrix errors caused by head model distortion. The estimation of the regularization parameter was also investigated based on the L-curve. Computer simulations suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials, confirming that the TTLS provides high spatial resolution in cortical dipole imaging.
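
    Truncated TLS has a compact closed form built from the SVD of the augmented matrix [A b]: discard the singular directions beyond a truncation index k and assemble the minimum-norm solution from the remaining right singular vectors (in the paper the index would come from the L-curve; here it is fixed by hand):

        import numpy as np

        def ttls(A, b, k):
            n = A.shape[1]
            V = np.linalg.svd(np.column_stack([A, b]))[2].T  # right singular vectors
            V12 = V[:n, k:]          # top block of the discarded directions
            V22 = V[n:, k:].ravel()  # bottom row of the discarded directions
            # Minimum-norm TTLS solution: x = -V12 V22^T / ||V22||^2
            return -(V12 @ V22) / (V22 ** 2).sum()

        A = np.random.randn(50, 10)
        x_true = np.random.randn(10)
        b = A @ x_true + 0.01 * np.random.randn(50)   # noisy "measurements"
        x_est = ttls(A, b, k=8)                       # keep 8 singular directions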

  1. Skipping Strategy (SS) for Initial Population of Job-Shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Abdolrazzagh-Nezhad, M.; Nababan, E. B.; Sarim, H. M.

    2018-03-01

    Generating the initial population for the job-shop scheduling problem (JSSP) is an essential step toward obtaining a near-optimal solution, and the techniques used to solve JSSP are computationally demanding. A skipping strategy (SS) is employed to acquire the initial population after the sequence of jobs on machines and the sequence of operations (expressed as Plates-jobs and mPlates-jobs) are determined. The proposed technique is applied to benchmark datasets and the results are compared to those of other initialization techniques. It is shown that the initial population obtained from the SS approach can generate optimal solutions.

  2. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.

  3. Continuous piecewise-linear, reduced-order electrochemical model for lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid

    2017-02-01

    Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.

  4. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification.

    PubMed

    Gunavathi, Chellamuthu; Premalatha, Kandasamy

    2014-01-01

    Feature selection in cancer classification is a central area of research in the field of bioinformatics, used to select informative genes from the thousands of genes on a microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. A swarm intelligence (SI) technique then finds the informative genes from the top-m ranked genes, and these selected genes are used for classification. In this paper, shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. The SI techniques particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection, identifying informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied to 10 different benchmark datasets and examined with the SI techniques. The experimental results show that the k-NN classifier with SFLLF feature selection outperforms PSO, CS, and SFL.

  5. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    PubMed

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. log sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. A hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are in close agreement with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.
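
    The core idea, posing the ODE as a global error-minimization over the parameters of a trial solution, can be sketched with SciPy's differential evolution standing in for the paper's GA+IPA combination (the toy problem and the three-term log-sigmoid trial form are illustrative assumptions):

        import numpy as np
        from scipy.optimize import differential_evolution

        # Toy nonlinear problem: y'(x) = -y(x)^2, y(0) = 1; exact y = 1/(1+x).
        xc = np.linspace(0.0, 1.0, 25)                    # collocation points

        def trial(p, x):
            c, w, b = p[:3], p[3:6], p[6:9]
            net = sum(ci / (1 + np.exp(-(wi * x + bi)))   # log-sigmoid basis
                      for ci, wi, bi in zip(c, w, b))
            return 1.0 + x * net                          # enforces y(0) = 1

        def fitness(p, h=1e-5):
            dy = (trial(p, xc + h) - trial(p, xc - h)) / (2 * h)
            return np.mean((dy + trial(p, xc) ** 2) ** 2) # global residual error

        res = differential_evolution(fitness, bounds=[(-5, 5)] * 9,
                                     seed=0, maxiter=300, tol=1e-10)
        y_approx = trial(res.x, xc)                       # compare with 1/(1+x)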

  6. New spatial diversity equalizer based on PLL

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    A new Spatial Diversity Equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotation simultaneously in digital communication systems. The proposed SDE consists of an equal-gain combining technique built on the well-known constant modulus algorithm (CMA) for blind equalization, together with a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover from carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
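
    For reference, the constant modulus algorithm at the heart of such an equalizer adapts the taps from the received samples alone; a baseline single-branch sketch for QPSK (illustrative channel and step size, without the paper's diversity combining or PLL):

        import numpy as np

        rng = np.random.default_rng(0)
        s = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
        x = np.convolve(s, [1.0, 0.35 + 0.2j, 0.1])[:s.size]    # toy ISI channel
        x += 0.01 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

        L_taps, mu = 11, 1e-3
        w = np.zeros(L_taps, dtype=complex)
        w[L_taps // 2] = 1.0                              # center-spike initialization
        R2 = np.mean(np.abs(s) ** 4) / np.mean(np.abs(s) ** 2)  # CMA dispersion constant

        for n in range(L_taps, x.size):
            u = x[n - L_taps:n][::-1]                     # regressor, newest first
            y = np.vdot(w, u)                             # equalizer output w^H u
            e = y * (R2 - np.abs(y) ** 2)                 # CMA error term
            w = w + mu * np.conj(e) * u                   # stochastic-gradient update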

  7. Sci-Thur AM: YIS – 03: Combining sagittally-reconstructed 3D and live-2D ultrasound for high-dose-rate prostate brachytherapy needle segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrinivich, Thomas; Hoover, Douglas; Surry, Kathlee

    Ultrasound-guided high-dose-rate prostate brachytherapy (HDR-BT) needle segmentation is performed clinically using live-2D sagittal images. Organ segmentation is then performed using axial images, introducing a source of geometric uncertainty. Sagittally-reconstructed 3D (SR3D) ultrasound enables both needle and organ segmentation, but suffers from shadow artifacts. We present a needle segmentation technique augmenting SR3D with live-2D sagittal images using mechanical probe tracking to mitigate image artifacts and compare it to the clinical standard. Seven prostate cancer patients underwent TRUS-guided HDR-BT during which the clinical and proposed segmentation techniques were completed in parallel using dual ultrasound video outputs. Calibrated needle end-length measurements were used to calculate insertion depth errors (IDEs), and the dosimetric impact of IDEs was evaluated by perturbing clinical treatment plan source positions. The proposed technique provided smaller IDEs than the clinical approach, with mean±SD of −0.3±2.2 mm and −0.5±3.7 mm, respectively. The proposed and clinical techniques resulted in 84% and 43% of needles with IDEs within ±3 mm, and IDE ranges across all needles of [−7.7 mm, 5.9 mm] and [−9.3 mm, 7.7 mm], respectively. The proposed and clinical IDEs lead to mean±SD changes in the volume of the prostate receiving the prescription dose of −0.6±0.9% and −2.0±5.3%, respectively. The proposed technique provides improved HDR-BT needle segmentation accuracy over the clinical technique, leading to decreased dosimetric uncertainty by eliminating the axial-to-sagittal registration, and mitigates the effect of shadow artifacts by incorporating mechanically registered live-2D sagittal images.

  8. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.

  9. Review of phase measuring deflectometry

    DOE PAGES

    Huang, Lei; Idir, Mourad; Zuo, Chao; ...

    2018-04-07

    As a low-cost, full-field, three-dimensional shape measurement technique with high dynamic range, Phase Measuring Deflectometry (PMD) has been studied and improved into a simple and effective means to inspect specularly reflecting surfaces. In this review, the fundamental principle and basic concepts of the PMD technique are introduced, followed by a brief overview of its key developments since it was first proposed. In addition, the similarities and differences compared with other related techniques are discussed to highlight the distinguishing features of PMD. In conclusion, we address the major challenges, the existing solutions, and the remaining limitations of this technique to provide suggestions for potential future investigations.

  10. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    NASA Astrophysics Data System (ADS)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes, to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise, and the denoising performance was compared to that of correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
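
    A rough sketch of the per-mode processing is shown below, assuming the intrinsic mode functions have already been extracted (e.g., with the PyEMD package); the correlation threshold rule and the use of a smoothing spline as the roughness-penalty step are illustrative stand-ins for the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def emd_strp(signal, imfs, rho=0.3, smooth=1.0):
        """Denoising sketch: correlation splits the IMFs, then per-group processing."""
        t = np.arange(len(signal))
        corr = np.array([np.corrcoef(signal, m)[0, 1] for m in imfs])
        cut = rho * np.abs(corr).max()           # relevance threshold (illustrative)
        rec = np.zeros(len(signal))
        for m, c in zip(imfs, corr):
            if abs(c) >= cut:                    # relevant mode: roughness penalty
                rec += UnivariateSpline(t, m, s=smooth * len(t))(t)
            else:                                # irrelevant mode: soft thresholding
                thr = np.median(np.abs(m)) / 0.6745 * np.sqrt(2 * np.log(len(m)))
                rec += np.sign(m) * np.maximum(np.abs(m) - thr, 0.0)
        return rec
    ```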

  11. Localization of thermal anomalies in electrical equipment using Infrared Thermography and support vector machine

    NASA Astrophysics Data System (ADS)

    Laib dit Leksir, Y.; Mansour, M.; Moussaoui, A.

    2018-03-01

    The analysis and processing of databases obtained from infrared thermal inspections of electrical installations require new tools that extract more information than visual inspection alone. Consequently, methods based on the capture of thermal images show great potential and are increasingly employed in this field. However, effective techniques are needed to analyse these databases in order to extract significant information on the state of the infrastructure. This paper explains how such an approach can be implemented and proposes a system that can help detect faults in thermal images of electrical installations. The proposed method classifies and identifies the region of interest (ROI), with the identification conducted using the support vector machine (SVM) algorithm. The aim here is to capture the faults present in electrical equipment during the inspection of machines using a FLIR A40 camera. Binarization techniques are then employed to select the region of interest. Finally, a comparative analysis of the misclassification errors obtained with the proposed method, fuzzy c-means, and Otsu's method is also presented.

  12. Spectrophotometric methods for simultaneous determination of betamethasone valerate and fusidic acid in their binary mixture.

    PubMed

    Lotfy, Hayam Mahmoud; Salem, Hesham; Abdelkawy, Mohammad; Samir, Ahmed

    2015-04-05

    Five spectrophotometric methods were successfully developed and validated for the determination of betamethasone valerate and fusidic acid in their binary mixture. These methods are the isoabsorptive point method combined with the first derivative (ISO Point-D1); the recently developed and well-established ratio difference (RD) method and the constant center method coupled with spectrum subtraction (CC); the derivative ratio (1DD); and mean centering of ratio spectra (MCR). A new enrichment technique, called the spectrum addition technique, was used instead of the traditional spiking technique. The proposed spectrophotometric procedures do not require any separation steps. The accuracy, precision, and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the pharmaceutical formulation, and the results obtained were statistically compared to those of the official methods. The statistical comparison showed no significant difference between the proposed methods and the official ones regarding either accuracy or precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and assistance for the hearing impaired. In speech communication systems, the quality and intelligibility of speech are of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal that is also pleasant to listen to, noise reduction is essential. In this paper, a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses a Daubechies mother wavelet to achieve better enhancement of speech corrupted by additive non-stationary noises that occur in real life, such as street and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may open a new path in speech processing. In the proposed technique, a discrete BWT is first applied to noisy speech to derive the TADBWT coefficients. The adaptive nature of the BWT is then captured by introducing a time-varying linear factor that updates the coefficients at each scale over time. This approach outperforms existing algorithms at lower input SNR due to modified, soft, level-dependent thresholding of the time-adaptive coefficients. Objective and subjective test results confirmed the competence of the TADBWT technique. The effectiveness of the proposed technique was also evaluated for a speaker recognition task in noisy environments; the recognition results show that TADBWT yields better performance than alternate methods, specifically at lower input SNR.
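
    The sketch below shows plain level-dependent soft thresholding with a Daubechies wavelet, the core operation on which the TADBWT scheme builds; the bionic adaptation factor and its time-varying update are not reproduced, and the threshold schedule is an assumed illustrative choice.

    ```python
    import numpy as np
    import pywt

    def wavelet_soft_denoise(speech, wavelet="db8", level=5):
        """Level-dependent soft thresholding with a Daubechies wavelet.

        A plain DWT stands in for the discrete bionic wavelet transform.
        """
        coeffs = pywt.wavedec(speech, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
        out = [coeffs[0]]                                   # keep approximation
        for j, d in enumerate(coeffs[1:], start=1):
            # Universal threshold, relaxed at finer levels (illustrative schedule).
            thr = sigma * np.sqrt(2 * np.log(len(speech))) / np.log2(j + 1)
            out.append(pywt.threshold(d, thr, mode="soft"))
        return pywt.waverec(out, wavelet)
    ```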

  14. Performance Analysis of Physical Layer Security of Opportunistic Scheduling in Multiuser Multirelay Cooperative Networks

    PubMed Central

    Shim, Kyusung; Do, Nhu Tri; An, Beongku

    2017-01-01

    In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source-relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) and selection combining (SC) techniques are each considered at the eavesdropper. Investigating the system performance in terms of secrecy outage probability (SOP), closed-form expressions for the SOP are derived, and the developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to the random source-relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimation compared to the OJSRS scheme, especially in dense cooperative networks. PMID:28212286
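
    To make the two-stage selection concrete, the sketch below first picks the source whose link to the eavesdropper is weakest and then the relay giving the largest two-hop secrecy capacity; unit-power, unit-noise links and decode-and-forward relaying are simplifying assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def psrs(g_sr, g_rd, g_se, g_re):
        """PSRS sketch: least-vulnerable source first, then the best relay.

        g_sr[k, m]: source-k -> relay-m channel gains; g_se[k]: source-k -> eve;
        g_rd[m]: relay-m -> destination; g_re[m]: relay-m -> eve.
        """
        c = lambda g: np.log2(1.0 + g)                  # link capacity, unit SNR scale
        k = int(np.argmin(c(g_se)))                     # least vulnerable source
        # Two-hop decode-and-forward secrecy capacity for each candidate relay.
        secrecy = np.maximum(np.minimum(c(g_sr[k]), c(g_rd)) - c(g_re), 0.0)
        m = int(np.argmax(secrecy))                     # relay maximizing secrecy
        return k, m
    ```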

  15. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are too expensive computationally and in memory. Therefore, these techniques face many challenges regarding their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost at mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all components of the wave simulator to the parallel processing capabilities of graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, the Oculus Rift head-mounted display, and an Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in virtual environments.

  16. Novel Variants of a Histogram Shift-Based Reversible Watermarking Technique for Medical Images to Improve Hiding Capacity

    PubMed Central

    Tuckley, Kushal

    2017-01-01

    In telemedicine systems, critical medical data is shared on a public communication channel, which increases the risk of unauthorised access to patients' information. This underlines the importance of secrecy and authentication for medical data. This paper presents two innovative variations of classical histogram-shift methods that increase the hiding capacity. The first technique divides the image into nonoverlapping blocks and embeds the watermark in each block individually using the histogram method. The second method separates the region of interest and embeds the watermark only in the region of noninterest, keeping the medical information intact; this is valuable in critical medical cases. The high PSNR (above 45 dB) obtained for both techniques indicates the imperceptibility of the approaches. Experimental results illustrate the superiority of the proposed approaches when compared with other methods based on histogram-shifting techniques: they improve embedding capacity by 5-15% depending on the image type, without affecting the quality of the watermarked image. Both techniques also enable lossless reconstruction of the watermark and the host medical image. The higher embedding capacity makes the proposed approaches attractive for medical image watermarking applications without compromising image quality. PMID:29104744
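
    The sketch below shows the classical histogram-shift embedding that both variants build on, applied to a single block as in the first method; handling of the peak-at-boundary case and the side information needed for extraction are omitted, and the peak/zero selection rule is the common textbook choice rather than the paper's exact procedure.

    ```python
    import numpy as np

    def hs_embed(block, bits):
        """Classical histogram-shift embedding in one 8-bit grayscale block.

        Assumes the histogram peak lies below the chosen zero point; the pair
        (peak, zero) must accompany the block for lossless extraction.
        """
        hist = np.bincount(block.ravel(), minlength=256)
        peak = int(np.argmax(hist))
        zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # emptiest bin above peak
        out = block.astype(np.int32)                        # working copy
        out[(out > peak) & (out < zero)] += 1               # shift to free peak+1
        it = iter(bits)
        flat = out.ravel()
        for i in np.flatnonzero(flat == peak):              # embed at peak pixels
            flat[i] += next(it, 0)                          # bit 1 -> peak+1, bit 0 -> peak
        return flat.reshape(block.shape).astype(np.uint8), peak, zero
    ```

    Capacity per block equals the peak-bin count, which is why block-wise and ROI-aware variants raise the total hiding capacity.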

  17. A secure and robust information hiding technique for covert communication

    NASA Astrophysics Data System (ADS)

    Parah, S. A.; Sheikh, J. A.; Hafiz, A. M.; Bhat, G. M.

    2015-08-01

    The unprecedented advancement of multimedia and the growth of the internet have made it possible to reproduce and distribute digital media more easily and faster. This has given rise to information security issues, especially when the information pertains to national security, e-banking transactions, etc. The disguised form of encrypted data makes an adversary suspicious and increases the chance of attack. Information hiding overcomes this inherent problem of cryptographic systems and is emerging as an effective means of securing sensitive data transmitted over insecure channels. In this paper, a secure and robust information hiding technique referred to as Intermediate Significant Bit Plane Embedding (ISBPE) is presented. The data to be embedded is scrambled, and embedding is carried out using the concepts of a Pseudorandom Address Vector (PAV) and a Complementary Address Vector (CAV) to enhance the security of the embedded data. The proposed ISBPE technique is fully immune to the Least Significant Bit (LSB) removal/replacement attack. Experimental investigations reveal that the proposed technique is more robust to various image processing attacks, such as JPEG compression, Additive White Gaussian Noise (AWGN), and low-pass filtering, than conventional LSB techniques. The various advantages offered by the ISBPE technique make it a good candidate for covert communication.

  18. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  19. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196

  20. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm that compensates for the depth-dependent axial Point Spread Function (PSF) blur these factors introduce in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework that takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise that is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our technique in better visualizing and preserving fine details and structures in the imaged sample, as well as better suppressing speckle noise. This illustrates the potential usefulness of the proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.

  1. Alternate deposition and hydrogen doping technique for ZnO thin films

    NASA Astrophysics Data System (ADS)

    Myong, Seung Yeop; Lim, Koeng Su

    2006-08-01

    We propose an alternate deposition and hydrogen doping (ADHD) technique for polycrystalline hydrogen-doped ZnO thin films, a sublayer-by-sublayer deposition based on metalorganic chemical vapor deposition and mercury-sensitized photodecomposition of the hydrogen doping gas. Compared to conventional post-deposition hydrogen doping, the ADHD process provides superior electrical conductivity, stability, and surface roughness. Photoluminescence spectra measured at 10 K reveal that the ADHD technique improves ultraviolet and violet emissions by suppressing the green and yellow emissions. The ADHD technique is therefore a very promising aid to the manufacture of improved transparent conducting electrodes and light-emitting materials.

  2. Effects of Restricted Launch Conditions for the Enhancement of Bandwidth-Distance Product of Multimode Fiber Links

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2000-01-01

    Several techniques have been proposed to enhance the multimode fiber bandwidth-distance product. A single-mode-to-multimode offset launch technique has been tested at Kennedy Space Center, achieving a significant enhancement in multimode fiber link bandwidth: close to a three-fold improvement compared to the standard zero-offset launch. Moreover, a significant reduction in modal noise has been observed as a function of the offset launch displacement. However, a significant reduction in the overall signal-to-noise ratio is also observed, owing to signal attenuation caused by mode radiation from the fiber core into its cladding.

  3. Automatic correction of echo-planar imaging (EPI) ghosting artifacts in real-time interactive cardiac MRI using sensitivity encoding.

    PubMed

    Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S

    2008-01-01

    The aim was to develop a method that automatically corrects ghosting artifacts due to echo misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution, ghost-free coil sensitivity maps to be dynamically estimated. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and in vivo cardiac studies, and was also used in conjunction with two-fold acceleration. With nonaccelerated acquisition, the proposed method provided excellent suppression of ghosting artifacts in all scan planes and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed to arbitrary oblique orientations.

  4. Imitation-tumor targeting based on continuous-wave near-infrared tomography.

    PubMed

    Liu, Dan; Liu, Xin; Zhang, Yan; Wang, Qisong; Lu, Jingyang; Sun, Jinwei

    2017-12-01

    Continuous-wave near-infrared (NIR) optical spectroscopy has shown great diagnostic capability for early tumor detection, with the advantages of being low-cost, portable, non-invasive, and non-radiative. In this paper, the modified Lambert-Beer theory is deployed to address the low-resolution issues of the NIR technique and to design a tumor detection and imaging system. Considering that tumor tissues feature high blood flow and hypoxia, the proposed technique can determine the location, size, and other properties of tumor tissue by comparing the absorbance of pathological and normal tissues. Finally, the tumor tissues can be imaged through a tomographic method. Simulation experiments show that the proposed technique and designed system can efficiently detect tumor tissues, achieving imaging precision within 1 mm. This work shows great potential for the diagnosis of tumors close to the body surface.
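
    The sketch below shows the modified Lambert-Beer computation underlying such systems: absorbance changes at two wavelengths are inverted for oxy- and deoxyhemoglobin concentration changes, which is how elevated blood volume at a tumor site would show up. All numeric values, including the extinction coefficients, are illustrative placeholders.

    ```python
    import numpy as np

    # Modified Lambert-Beer: dA = eps * dC * d * DPF, so hemoglobin concentration
    # changes follow from absorbance changes at two wavelengths. The extinction
    # coefficients below are placeholders, not tabulated values.
    eps = np.array([[0.56, 1.30],      # wavelength 1: [HbO2, Hb]
                    [1.05, 0.78]])     # wavelength 2
    d, dpf = 3.0, 6.0                  # source-detector distance (cm), path factor

    def delta_conc(dA):
        """Solve dA = (eps * d * dpf) @ dC for the concentration changes dC."""
        return np.linalg.solve(eps * d * dpf, dA)

    dA = np.array([0.012, 0.009])      # measured absorbance changes (illustrative)
    print(delta_conc(dA))              # [dHbO2, dHb]; tumors show elevated blood volume
    ```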

  5. Nonlinear earthquake analysis of reinforced concrete frames with fiber and Bernoulli-Euler beam-column element.

    PubMed

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is investigated for the nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for the dynamic analysis of RC frames, and a predictor-corrector form of the Bossak-α method is applied as the dynamic integration scheme. A comparison of experimental data for an RC column element with numerical results obtained from the proposed solution technique is presented to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a portal reinforced concrete frame are obtained to compare the proposed solution technique with a fibre element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load, and the damage regions, propagation, and intensities according to both approaches are examined.

  6. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. The search algorithm also utilizes three different fitness functions (two single-objective fitness functions and one multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.

  7. Integrating instance selection, instance weighting, and feature weighting for nearest neighbor classifiers by coevolutionary algorithms.

    PubMed

    Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco

    2012-10-01

    Cooperative coevolution is a successful branch of evolutionary computation that allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It can be applied to the development of advanced classification methods that integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods in order to show the benefits of employing coevolution to apply the considered techniques simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms the other methods in the comparison, making it a suitable tool for enhancing the nearest neighbor classifier.

  8. Optimal Design of MPPT Controllers for Grid Connected Photovoltaic Array System

    NASA Astrophysics Data System (ADS)

    Ebrahim, M. A.; AbdelHadi, H. A.; Mahmoud, H. M.; Saied, E. M.; Salama, M. M.

    2016-10-01

    Integrating photovoltaic (PV) plants into the electric power system poses challenges for power system dynamic performance. These challenges stem primarily from the natural characteristics of PV plants, which differ in some respects from those of conventional plants. The most significant challenge is how to extract and regulate the maximum power from the sun. This paper presents the optimal design of the most commonly used Maximum Power Point Tracking (MPPT) techniques based on Proportional-Integral control tuned by Particle Swarm Optimization (PI-PSO). The techniques considered are (1) incremental conductance, (2) perturb and observe, (3) fractional short-circuit current, and (4) fractional open-circuit voltage. This work provides a comprehensive comparative study of the energy availability ratio from the photovoltaic panels. The simulation results show that the proposed controllers have an impressive tracking response and greatly improve the system's dynamic performance.
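
    As an example of one of the compared techniques, the sketch below implements a single fixed-step perturb-and-observe update; the paper instead feeds such an MPPT stage through a PI controller tuned by PSO, so the step logic here is only the underlying idea.

    ```python
    def perturb_and_observe(v, i, state, dv=0.5):
        """One P&O step: move the operating voltage toward higher PV power.

        `state` carries the previous (voltage, power); returns the new voltage
        reference and the updated state. The step size `dv` is illustrative.
        """
        p = v * i
        v_prev, p_prev = state
        # Keep perturbing in the same direction while power grows, else reverse.
        step = dv if (p - p_prev) * (v - v_prev) > 0 else -dv
        return v + step, (v, p)

    # Inside the control loop, called once per sampling period:
    #   v_ref, state = perturb_and_observe(v_meas, i_meas, state)
    ```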

  9. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  10. A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback

    NASA Astrophysics Data System (ADS)

    Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki

    Analog multipliers are one of the most important building blocks in analog signal processing circuits, and high linearity and a wide input range are usually required of analog four-quadrant multipliers in most applications. Therefore, a highly linear, wide-input-range, four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. Firstly, a novel configuration of the four-quadrant multiplier cell is presented; its input dynamic range and linearity are improved significantly by adding two resistors, compared with the conventional structure. Then, based on the proposed multiplier-cell configuration, a four-quadrant CMOS analog multiplier with an active feedback technique is implemented with two operational amplifiers. Owing to both the proposed multiplier cell and the active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The proposed multiplier was fabricated in a 0.6 µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6 Vpp with 0.159% linearity error on VX and 4.8 Vpp with 0.51% linearity error on VY, for ±2.5 V power supply voltages.

  11. Probabilistic retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease, and stroke. Due to varying imaging conditions, retinal images may be degraded; consequently, the enhancement of such images and of the vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models that can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied to the enhanced image to demonstrate the effectiveness of the enhancement filter.

  12. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    PubMed

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters of the DNN structure are trained with probability-based feature vectors extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared to the traditional artificial neural network applied to an IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBN), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on a controller area network (CAN) bus.

  14. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of methods complementary to system identification for automatic structural damage assessment. Its essence is to use well-known statistical concepts to define boundaries between pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of the statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Simulated acceleration data from a multi-degree-of-freedom system is then generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of the proposed algorithms.
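
    A minimal sketch of the autoregressive feature idea is given below: an AR model fitted to a healthy-state record is applied to a test record, and growth in the residual statistics serves as the damage-sensitive feature; the specific ratio returned is an illustrative choice, not one of the paper's proposed features.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def ar_damage_feature(ref, test, order=10):
        """Score a test record by its residuals under the healthy AR model."""
        fit = AutoReg(ref, lags=order).fit()
        c, phi = fit.params[0], fit.params[1:]          # intercept, lag coefficients
        # One-step predictions of the test signal using the reference model.
        pred = c + np.array([phi @ test[t - order:t][::-1]
                             for t in range(order, len(test))])
        resid = test[order:] - pred
        # Residual growth relative to the healthy record signals damage.
        return resid.std() / ref.std()
    ```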

  15. Bag of Lines (BoL) for Improved Aerial Scene Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.

  16. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    Optical switching is one of the most essential phenomena in the optical domain, and electro-optic switching can be used to build effective combinational and sequential logic circuits. Digital computation in the optical domain inherits considerable advantages of optical communication technology, e.g., immunity to electromagnetic interference, compact size, signal security, parallel computing, and large bandwidth. This paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters with respect to performance-affecting quantities, e.g., crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.

  17. Proposed alternative revision strategy for broken S1 pedicle screw: radiological study, review of the literature, and case reports.

    PubMed

    Elgafy, Hossein; Miller, Jacob D; Benedict, Gregory M; Seal, Ryan J; Liu, Jiayong

    2013-07-01

    There have been many reports outlining differing methods for managing a broken S1 screw. To the authors' best knowledge, the technique used in the present study, which involves insertion of a second pedicle screw without removing the broken screw shaft, has not been described previously. This work comprises a radiological study, a literature review, and two case reports of the surgical technique, with the aim of reporting a proposed new surgical technique for the management of broken S1 pedicle screws. Computed tomography (CT) scans of 50 patients with a total of 100 S1 pedicles were analyzed; there were 25 male and 25 female patients with an average age of 51 years (range, 36 to 68 years). The cephalad-caudal length, medial-lateral width, and cross-sectional area of the S1 pedicle were measured and compared with the diameter of a pedicle screw to assess the possibility of inserting a second screw in the S1 pedicle without removing the broken screw shaft. Two case reports of the proposed technique are presented. The left and right S1 pedicle cross-sectional areas in females measured 456.00±4.00 and 457.00±3.00 mm², respectively; in males they measured 638.00±2.00 and 639.00±1.00 mm², respectively. There were statistically significant differences between males and females in S1 pedicle length, width, and cross-sectional area (p<.05). At 2-year follow-up, the two case reports of the proposed technique showed resolution of low back pain and radicular pain, and plain radiographs and CT scans showed posterolateral fusion mass and hardware in good position with no evidence of screw loosening. The S1 pedicle dimensions measured on the CT scans reviewed in the present study suggest that it may be anatomically feasible to place a second screw through the S1 pedicle without removal of the broken screw shaft. This treatment method may reduce the complications associated with other described revision strategies for broken S1 screws. Published by Elsevier Inc.

  18. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to measurements.

  19. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamics computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces), gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses; their influence on the COP computation, however, is large, and the results show very distinguishable patterns in the COP movements. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, shows the accuracy of those parameters. PMID:28662090

  20. Semantic Similarity between Web Documents Using Ontology

    NASA Astrophysics Data System (ADS)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information organized as interlinked web pages, but the process of extracting significant information with the assistance of a search engine remains difficult. This is because web information is written mainly in natural language, intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and concept relationships, but the results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that considers not only the concepts in the documents but also the relationships between those concepts. In our approach, documents are processed by building an ontology for each document from a base ontology and a dictionary of concept records, where each record comprises the probable words that represent a given concept. Finally, the document ontologies are compared to determine their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between the concepts are identified by capturing author and user intention. The proposed semantic analysis technique provides improved results compared with existing techniques.

  1. Respiration monitoring by Electrical Bioimpedance (EBI) Technique in a group of healthy males. Calibration equations.

    NASA Astrophysics Data System (ADS)

    Balleza, M.; Vargas, M.; Kashina, S.; Huerta, M. R.; Delgadillo, I.; Moreno, G.

    2017-01-01

    Several research groups have proposed electrical impedance tomography (EIT) for analysing lung ventilation. Using 16 electrodes, EIT can obtain a set of transversal section images of the thorax. In previous work, we obtained from EIT images an alternating impedance signal corresponding to respiration, and then derived a set of calibration equations to transform those impedance changes into a measurable volume signal. However, the EIT technique is still too expensive for outpatient care in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiration behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes, determined at 4 different frequencies, into a measurable volume signal. A group of 8 healthy males was assessed. The results showed a high goodness of fit for the group calibration equations, and the volume determinations obtained by EBI were compared with those obtained by our gold standard. Therefore, although EBI does not provide complete information about the lung impedance vectors compared with EIT, it is possible to monitor respiration with it.

  3. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    PubMed

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on the dots' height, among other features, and determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - against data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, fits the quantum dots' height histogram obtained with the proposed method better. Finally, the quantum dots' heights obtained were used to calculate predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution for iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate, O(1/k²). In practice, however, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam, and cone-beam CT geometries. To achieve maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to existing CS techniques.

  5. Intelligent control for PMSM based on online PSO considering parameters change

    NASA Astrophysics Data System (ADS)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    A novel online particle swarm optimization (PSO) method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives, taking stator resistance variation into account. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter is also taken into account, due to the dead-time, threshold, and voltage drop of the switching devices, in order to simulate the system under practical conditions. The speed and current PI controller gains are optimized online with PSO, and the fitness function changes according to the system's dynamic and steady states. The proposed optimization algorithm is compared with the conventional PI control method under step speed changes and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than the conventional PI controller design.
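
    A minimal PSO loop of the kind used for such gain tuning is sketched below; the cost function is assumed to wrap a drive simulation and return a scalar error measure (e.g., the ITAE of the speed response), and the stand-in quadratic cost is only for demonstration.

    ```python
    import numpy as np

    def pso(cost, bounds, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal PSO used here to tune (Kp, Ki) against a scalar cost."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        x = rng.uniform(lo, hi, (n, len(lo)))            # particle positions
        v = np.zeros_like(x)                             # particle velocities
        pb, pb_cost = x.copy(), np.array([cost(p) for p in x])
        for _ in range(iters):
            g = pb[np.argmin(pb_cost)]                   # global best so far
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                   # keep gains in bounds
            c = np.array([cost(p) for p in x])
            better = c < pb_cost
            pb[better], pb_cost[better] = x[better], c[better]
        return pb[np.argmin(pb_cost)]

    # Stand-in cost; a real run would simulate the PMSM drive per candidate.
    best = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 30.0) ** 2,
               bounds=[(0, 10), (0, 100)])
    print(best)   # approximately [2, 30]
    ```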

  6. Copy-move forgery detection through stationary wavelets and local binary pattern variance for forensic analysis in digital images.

    PubMed

    Mahmood, Toqeer; Irtaza, Aun; Mehmood, Zahid; Tariq Mahmood, Muhammad

    2017-10-01

    The most common form of image tampering, often done for malicious purposes, is to copy a region of an image and paste it elsewhere to hide some other region. As both regions usually have the same texture properties, the artifact is invisible to viewers, and the credibility of the image becomes questionable in proof-centered applications. Hence, means are required to validate the integrity of the image and identify the tampered regions. Therefore, this study presents an efficient way of copy-move forgery detection (CMFD) through local binary pattern variance (LBPV) over the low approximation components of the stationary wavelets. The CMFD technique presented in this paper is applied over circular regions to better address possible post-processing operations. The proposed technique is evaluated on the CoMoFoD and Kodak lossless true color image (KLTCI) datasets in the presence of translation, flipping, blurring, rotation, scaling, color reduction, brightness change and multiple forged regions in an image. The evaluation reveals the prominence of the proposed technique compared to the state of the art. Consequently, the proposed technique can reliably be applied to detect modified regions, with benefits for journalism, law enforcement, judiciary, and other proof-critical domains. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by the BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in the IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, compared with the techniques used in previous works, while allowing performance close to that obtained with perfect channel knowledge. PMID:24678274

  8. Low-bit rate feedback strategies for iterative IA-precoded MIMO-OFDM-based systems.

    PubMed

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by the BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in the IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, compared with the techniques used in previous works, while allowing performance close to that obtained with perfect channel knowledge.

  9. A comparative study of trochanteric and basicervical fractures of the femur treated with the Ender and McLaughlin techniques.

    PubMed

    Indemini, E; Clerico, P; Fenoglio, E; Mariotti, U

    1982-09-01

    Endomedullary nailing as proposed by Ender is an important alternative in the treatment of trochanteric and basicervical fractures of the femur (Amici et al., 1980; Carret et al., 1980; Ender, 1970; Kempf et al., 1979; Zinghi et al., 1979). Rush's concept (the Eiffel Tower, for the distal epiphysis) is reproposed with some variations and transposed to the femoral neck. The aim of the operation differs from that of the nail and plate technique in that, instead of trying to achieve anatomical reconstruction, an immediate functional by-pass of the fractured part is attempted. After using this technique for three years, we compared the old method, the McLaughlin nail and plate, which we had not abandoned, with the new Ender nail.

  10. Doppler ultrasound-based measurement of tendon velocity and displacement for application toward detecting user-intended motion.

    PubMed

    Stegman, Kelly J; Park, Edward J; Dechev, Nikolai

    2012-07-01

    The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity for the purpose of controlling a prosthetic device. This feasibility study aims to determine whether the proposed Doppler ultrasound technique can accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with a tendon-mimicking experiment built around two instruments: a commercial ultrasound scanner and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner and processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies to test the technique on human arm tendon anatomy, and later live human test subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.

  11. Surface registration technique for close-range mapping applications

    NASA Astrophysics Data System (ADS)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with extracted features from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm that is based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object was performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved and improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.
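
    For reference, a minimal point-to-point ICP iteration (nearest-neighbour correspondence plus a closed-form SVD rigid fit) is sketched below; the MIHT stage that the paper adds on top, which removes the need for a good initial guess, is not reproduced here:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, n_iter=30):
        """Minimal point-to-point ICP: returns rotation R and translation t
        aligning `src` to `dst` (both (N, 3) arrays); no MIHT initialization."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(n_iter):
            # Correspondence: nearest neighbour in the destination scan.
            _, idx = tree.query(cur)
            matched = dst[idx]
            # Closed-form rigid transform via SVD (Kabsch algorithm).
            mu_s, mu_d = cur.mean(0), matched.mean(0)
            H = (cur - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T
            t_step = mu_d - R_step @ mu_s
            cur = cur @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t
    ```

    Given two raw scans as (N, 3) arrays, `icp(scan_a, scan_b)` returns the rotation and translation relating them, provided the initial alignment is close enough for nearest neighbours to be meaningful.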

  12. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of Northwestern Indiana's vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.

  13. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  14. Comparative Approach of MRI-Based Brain Tumor Segmentation and Classification Using Genetic Algorithm.

    PubMed

    Bahadure, Nilesh Bhaskarrao; Ray, Arun Kumar; Thethi, Har Pal

    2018-01-17

    The detection of a brain tumor and its classification from modern imaging modalities is a primary concern, but it is a time-consuming and tedious task for radiologists or clinical supervisors. The accuracy of detection and classification of tumor stages performed by radiologists depends on their experience alone, so computer-aided technology is very important to aid diagnostic accuracy. In this study, to improve the performance of tumor detection, we investigated a comparative approach of different segmentation techniques and selected the best one by comparing their segmentation scores. Further, to improve the classification accuracy, a genetic algorithm is employed for the automatic classification of tumor stage. The decision of the classification stage is supported by extracting relevant features and area calculation. The experimental results of the proposed technique are evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on segmentation score, accuracy, sensitivity, specificity, and Dice similarity index coefficient. The experimental results achieved 92.03% accuracy, 91.42% specificity, 92.36% sensitivity, and an average segmentation score between 0.82 and 0.93, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues from brain MR images. The experimental results also obtained an average Dice similarity index coefficient of 93.79%, which indicates better overlap between the automatically extracted tumor regions and the tumor regions manually extracted by radiologists.

  15. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  16. Solar Panel System for Street Light Using Maximum Power Point Tracking (MPPT) Technique

    NASA Astrophysics Data System (ADS)

    Wiedjaja, A.; Harta, S.; Josses, L.; Winardi; Rinda, H.

    2014-03-01

    Solar energy is one form of renewable energy which is very abundant in regions close to the equator. One application of solar energy is street lighting. This research focuses on using the maximum power point tracking (MPPT) technique, particularly the perturb and observe (P&O) algorithm, to charge the battery for a street light system. The proposed charger circuit achieves 20.73% higher power efficiency compared to that of a non-MPPT charger. We also develop the LED driver circuit for the system, which achieves power efficiency up to 91.9% at a current of 1.06 A. The proposed street lighting system can be implemented at a relatively low cost for public areas.
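
    The P&O algorithm itself reduces to a sign rule: perturb the operating voltage, and keep the perturbation direction whenever the measured power increases. The sketch below shows this hill-climbing loop with a toy photovoltaic model standing in for the real panel and converter; all names and parameter values are illustrative:

    ```python
    def perturb_and_observe(read_voltage, read_current, apply_v_ref,
                            v_ref=17.0, step=0.1, n_steps=500):
        """Basic P&O hill-climbing: perturb the reference voltage and keep the
        perturbation direction whenever the output power increases."""
        apply_v_ref(v_ref)
        p_prev = read_voltage() * read_current()
        direction = +1
        for _ in range(n_steps):
            v_ref += direction * step          # perturb the operating point
            apply_v_ref(v_ref)
            p = read_voltage() * read_current()
            if p < p_prev:                     # power dropped: reverse direction
                direction = -direction
            p_prev = p
        return v_ref

    # Toy PV model standing in for the panel/converter: current falls off
    # steeply above the knee voltage, giving a power maximum near 16 V.
    state = {'v': 17.0}
    apply_v = lambda v: state.update(v=v)
    read_v = lambda: state['v']
    read_i = lambda: max(0.0, 3.0 * (1.0 - (state['v'] / 21.0) ** 8))
    print(round(perturb_and_observe(read_v, read_i, apply_v), 2))
    ```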

  17. nu-Anomica: A Fast Support Vector Based Novelty Detection Technique

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.

    2009-01-01

    In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5-20 times.
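
    The benchmark in this comparison is the classical one-class SVM, whose nu parameter upper-bounds the fraction of training outliers and lower-bounds the fraction of support vectors (fewer support vectors means faster testing). A minimal baseline with scikit-learn on synthetic data looks like this (nu-Anomica itself is not sketched here):

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(3)
    X_train = rng.standard_normal((1000, 5))            # nominal data only
    X_test = np.vstack([rng.standard_normal((50, 5)),   # nominal
                        rng.standard_normal((50, 5)) + 4.0])  # anomalous

    # nu bounds the fraction of training errors and of support vectors;
    # a smaller support set directly shortens the test time.
    clf = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(X_train)
    pred = clf.predict(X_test)                          # +1 = nominal, -1 = anomaly
    print("flagged anomalies:", np.sum(pred == -1))
    print("support vectors:", clf.support_vectors_.shape[0])
    ```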

  18. High-resolution differential mode delay measurement for a multimode optical fiber using a modified optical frequency domain reflectometer.

    PubMed

    Ahn, T-J; Kim, D

    2005-10-03

    A novel differential mode delay (DMD) measurement technique for a multimode optical fiber based on optical frequency domain reflectometry (OFDR) is proposed. We have obtained a high-resolution DMD value of 0.054 ps/m for a commercial multimode optical fiber with a length of 50 m by using a modified OFDR built around a Mach-Zehnder interferometer (instead of the usual Michelson interferometer) with a tunable external cavity laser. We have also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method. The DMD resolution of our proposed OFDR technique is more than an order of magnitude better than that obtainable with a conventional time-domain method.

  19. A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.

    PubMed

    Barlow, Euan; Mulholland, Anthony J

    2011-11-01

    The fractional Fourier transform is proposed here as a model-based signal processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory, which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal-to-noise ratios.

  20. Robust sleep quality quantification method for a personal handheld device.

    PubMed

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

    The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize the nonstationary noise. Sleep or wake status was decided on each axis, and the totals were finally summed to calculate sleep efficiency (SE), generally regarded as sleep quality. A sleep experiment with 14 participants was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis: the activity during sleep was recorded not only by the proposed method but also simultaneously by well-known commercial applications; moreover, activity was recorded on different mattresses and locations to verify reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average bias; the average accuracy and average absolute bias error of the comparative applications were 76.33% and 17.52%, respectively.

  1. Segmentation of MR images via discriminative dictionary learning and sparse coding: application to hippocampus labeling.

    PubMed

    Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel

    2013-08-01

    We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. Comparing the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard and the EZW scheme through extensive experiments, we observe a significant improvement in rate-distortion performance and visual quality.

  3. Parametric representation of weld fillets using shell finite elements—a proposal based on minimum stiffness and inertia errors

    NASA Astrophysics Data System (ADS)

    Echer, L.; Marczak, R. J.

    2018-02-01

    The objective of the present work is to introduce a methodology capable of modelling welded components for structural stress analysis. The modelling technique was based on the recommendations of the International Institute of Welding; however, some geometrical features of the weld fillet were used as design parameters in an optimization problem. Namely, the weld leg length and thickness of the shell elements representing the weld fillet were optimized in such a way that the first natural frequencies were not changed significantly when compared to a reference result. Sequential linear programming was performed for T-joint structures corresponding to two different structural details: with and without full penetration weld fillets. Both structural details were tested in scenarios of various plate thicknesses and depths. Once the optimal parameters were found, a modelling procedure was proposed for T-shaped components. Furthermore, the proposed modelling technique was extended for overlapped welded joints. The results obtained were compared to well-established methodologies presented in standards and in the literature. The comparisons included results for natural frequencies, total mass and structural stress. By these comparisons, it was observed that some established practices produce significant errors in the overall stiffness and inertia. The methodology proposed herein does not share this issue and can be easily extended to other types of structure.

  4. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model compared to other well-known existing techniques. PMID:28191461
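
    The preprocessing chain of the CNN-DW branch (CLAHE enhancement followed by one level of 2D-DWT) can be reproduced with OpenCV and PyWavelets; the wavelet family below is an assumption, since the abstract does not name one:

    ```python
    import cv2
    import numpy as np
    import pywt

    def preprocess_patch(patch_u8, clip_limit=2.0, tile=(8, 8), wavelet='db1'):
        """CLAHE contrast enhancement followed by a single-level 2D discrete
        wavelet transform, returning the four subbands (LL, LH, HL, HH)."""
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
        enhanced = clahe.apply(patch_u8)               # expects 8-bit grayscale
        cA, (cH, cV, cD) = pywt.dwt2(enhanced.astype(np.float32), wavelet)
        return cA, cH, cV, cD

    patch = (np.random.default_rng(4).random((128, 128)) * 255).astype(np.uint8)
    subbands = preprocess_patch(patch)
    print([s.shape for s in subbands])                 # four 64x64 subbands
    ```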

  5. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model compared to other well-known existing techniques.

  6. Automatic QRS complex detection using two-level convolutional neural network.

    PubMed

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, an accurate QRS complex detection method based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features of different granularity. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique which only involves a difference operation in the temporal domain is adopted. Based on the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, performance variation is evaluated under different signal-to-noise ratio (SNR) values. An automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is thus proposed for QRS complex detection. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.

  7. [Blood levels of homocysteine by high pressure liquid chromatography and comparison with two other techniques].

    PubMed

    Ceppa, F; Drouillard, I; Chianea, D; Burnat, P; Perrier, F; Vaillant, C; El Jahiri, Y

    1999-01-01

    Cardiovascular diseases are the most common cause of death in industrialized countries. A new marker has emerged among the implicated risk factors in the past few years: homocysteine. This sulphured amino acid is an important intermediate in the transsulphuration and remethylation reactions of methionine metabolism. We propose to evaluate an in-house method for the determination of this parameter by high performance liquid chromatography (HPLC) and to compare it to a fluorescence polarization immunoassay technique (FPIA) and to gas chromatography-mass spectrometry (GC-MS). This method, which offers good sensitivity and precision, remains much less expensive than the FPIA technique.

  8. Development of a method of alignment between various SOLAR MAXIMUM MISSION experiments

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Results of an engineering study of the methods of alignment between various experiments for the solar maximum mission are described. The configuration studied consists of the instruments, mounts and instrument support platform located within the experiment module. Hardware design, fabrication methods and alignment techniques were studied with regard to optimizing the coalignment between the experiments and the fine sun sensor. The proposed hardware design was reviewed with regard to loads, stress, thermal distortion, alignment error budgets, fabrication techniques, alignment techniques and producibility. Methods of achieving comparable alignment accuracies on previous projects were also reviewed.

  9. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2012-01-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One of the previous approaches was the greedy algorithm proposed by Qiu et al., which needs knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the greedy algorithm proposed by Qiu et al. and with an optimal algorithm. We found that our approach achieves better results than the greedy algorithm of Qiu et al., but its computational time is higher than that of the greedy algorithm.
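
    A minimal sketch of such a genetic algorithm, assuming a precomputed client-to-node latency matrix and a fixed replica count, is shown below; the fitness, the mutation-only offspring scheme, and all parameters are illustrative choices, not the authors':

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_nodes, n_replicas, n_clients = 30, 4, 200
    latency = rng.uniform(1, 100, (n_clients, n_nodes))  # client-node latencies

    def fitness(placement):
        """Total latency when each client uses its nearest replica (lower is better)."""
        return latency[:, placement].min(axis=1).sum()

    def mutate(placement):
        """Swap one chosen replica site for a random unused node."""
        child = placement.copy()
        unused = np.setdiff1d(np.arange(n_nodes), child)
        child[rng.integers(n_replicas)] = rng.choice(unused)
        return child

    pop = [rng.choice(n_nodes, n_replicas, replace=False) for _ in range(40)]
    for _ in range(100):
        pop.sort(key=fitness)
        survivors = pop[:20]                              # truncation selection
        pop = survivors + [mutate(p) for p in survivors]  # mutation-only offspring
    best = min(pop, key=fitness)
    print("best placement:", sorted(best.tolist()), "cost:", round(fitness(best), 1))
    ```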

  10. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2011-12-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One of the previous approaches was the greedy algorithm proposed by Qiu et al., which needs knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the greedy algorithm proposed by Qiu et al. and with an optimal algorithm. We found that our approach achieves better results than the greedy algorithm of Qiu et al., but its computational time is higher than that of the greedy algorithm.

  11. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    The image fusion process consolidates data and information from various images of the same scene into a single image. Each source image may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed utilizing the Discrete Cosine Transform (DCT) to merge the source images into a single compact image containing a more exact depiction of the scene than any of the individual source images. In addition, the fused image preserves the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into three channels R, G, and B for each source image. (2) The DCT algorithm is applied to each channel (R, G, and B). (3) The variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of R of the source images is compared with its counterpart based on the variance value, and the block with the maximum variance value is selected to be the block in the new image. This process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values to pixel values, and all the channels are then combined to generate the fused image. The proposed technique can potentially solve the problem of unwanted side effects, such as blurring or blocking artifacts that reduce the quality of the fused image, in the image fusion process. The proposed approach is evaluated using three measurement units: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of this proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
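
    The block-variance selection rule at the heart of steps (2)-(5) can be sketched as follows, assuming two aligned source images whose dimensions are multiples of the block size; the variance of each 8 × 8 block's DCT coefficients decides which source contributes that block:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def fuse_channel(a, b, block=8):
        """Per-block DCT variance fusion of two aligned single-channel images."""
        fused = np.empty_like(a, dtype=np.float64)
        for i in range(0, a.shape[0], block):
            for j in range(0, a.shape[1], block):
                da = dctn(a[i:i + block, j:j + block], norm='ortho')
                db = dctn(b[i:i + block, j:j + block], norm='ortho')
                # Keep the block whose DCT coefficients have higher variance,
                # a proxy for local focus/detail.
                chosen = da if da.var() > db.var() else db
                fused[i:i + block, j:j + block] = idctn(chosen, norm='ortho')
        return fused

    def fuse_rgb(img_a, img_b):
        """Apply the per-channel fusion to R, G and B independently."""
        return np.stack([fuse_channel(img_a[..., c].astype(np.float64),
                                      img_b[..., c].astype(np.float64))
                         for c in range(3)], axis=-1)

    rng = np.random.default_rng(8)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(fuse_channel(a, b).shape)
    ```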

  12. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find, in a document corpus, the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
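
    The k-mer/topic-model pipeline maps directly onto standard text-mining tools. A compact sketch with scikit-learn follows; the toy sequences, the k-mer length and the topic count are illustrative stand-ins for the RDP data and the paper's settings:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy stand-ins for 16S barcode sequences; real input would come from RDP.
    sequences = ["ACGTACGTGGCCATAT", "ACGTACGTGGCCTTAA",
                 "TTGGCCAACGGTACGT", "TTGGCCAACGGTTGCA"]

    k = 4  # fixed k-mer length; the paper explores several lengths
    vectorizer = CountVectorizer(analyzer='char', ngram_range=(k, k),
                                 lowercase=False)
    X = vectorizer.fit_transform(sequences)   # k-mer frequency matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    topic_mix = lda.fit_transform(X)          # per-sequence topic proportions
    print(topic_mix.round(2))
    ```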

  13. Efficient computational model for classification of protein localization images using Extended Threshold Adjacency Statistics and Support Vector Machines.

    PubMed

    Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad

    2018-04-01

    Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images, so that drug development can be more effective. The objective of this paper is to propose a novel modification of the Threshold Adjacency Statistics (TAS) technique that enhances its discriminative power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using the 5-fold cross-validation technique. We observed that our novel utilization of the TAS technique improves the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity values were achieved for the Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community towards better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
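
    The core of Threshold Adjacency Statistics is inexpensive to express: binarize the image at a threshold, then histogram how many of each above-threshold pixel's eight neighbours are also above threshold. The sketch below computes such a 9-bin feature over several thresholds and majority-votes across per-threshold SVMs; the thresholds and the assumption of non-negative integer class labels are illustrative simplifications of the ETAS pipeline:

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from sklearn.svm import SVC

    KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])  # 8-neighbourhood

    def tas_features(image, threshold):
        """9-bin histogram of above-threshold neighbour counts (TAS-style)."""
        mask = image > threshold
        neighbours = convolve(mask.astype(int), KERNEL, mode='constant')
        counts = np.bincount(neighbours[mask], minlength=9)[:9]
        return counts / max(mask.sum(), 1)

    def predict_majority(train_imgs, train_y, test_imgs, thresholds):
        """One SVM per threshold range; final label by majority vote.
        Assumes class labels are non-negative integers."""
        votes = []
        for t in thresholds:
            Xtr = np.array([tas_features(im, t) for im in train_imgs])
            Xte = np.array([tas_features(im, t) for im in test_imgs])
            votes.append(SVC(kernel='rbf').fit(Xtr, train_y).predict(Xte))
        votes = np.array(votes)
        # Majority vote across the per-threshold classifiers.
        return np.array([np.bincount(col).argmax() for col in votes.T])
    ```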

  14. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
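
    For the criterion-based branch, the model weights follow directly from the information-criterion differences. A worked sketch of Akaike weights is shown below (the same form applies to BIC or KIC values); the criterion values are hypothetical:

    ```python
    import numpy as np

    # Hypothetical AIC values for four alternative conceptual models.
    aic = np.array([312.4, 310.1, 315.9, 311.0])

    delta = aic - aic.min()                 # differences from the best model
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                # normalized Akaike model weights

    for i, w in enumerate(weights):
        print(f"model {i + 1}: delta={delta[i]:.1f}, weight={w:.3f}")

    # A model-averaged prediction is then the weighted sum of per-model
    # predictions: y_avg = sum_i w_i * y_i.
    ```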

  15. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.

  16. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation errors compared to its original counterpart. The proposed change detection technique produces no omission errors and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
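
    The connected component step can be sketched with SciPy: label the change mask, then keep the components whose estimated area, width and height exceed thresholds. The thresholds and the grid cell size below are illustrative placeholders, not the paper's values:

    ```python
    import numpy as np
    from scipy import ndimage

    def changed_parts(change_mask, height_diff, cell_size=0.5,
                      min_area=10.0, min_width=2.0, min_height=1.5):
        """Label connected components of a binary change mask and keep those
        large enough to count as a demolished or new building part.
        `height_diff` holds the per-cell height change in metres."""
        labels, n = ndimage.label(change_mask)
        kept = []
        objects = ndimage.find_objects(labels)
        for idx, region in enumerate(objects, start=1):
            comp = labels[region] == idx
            area = comp.sum() * cell_size ** 2
            # Crude width proxy: smaller side of the component's bounding box.
            width = min(comp.shape) * cell_size
            height = np.abs(height_diff[region][comp]).max()
            if area >= min_area and width >= min_width and height >= min_height:
                kept.append(region)
        return kept
    ```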

  17. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    NASA Astrophysics Data System (ADS)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.

  18. Nonlocal means-based speckle filtering for ultrasound images

    PubMed Central

    Coupé, Pierrick; Hellier, Pierre; Kervrann, Charles; Barillot, Christian

    2009-01-01

    In image processing, restoration is expected to improve the qualitative inspection of the image and the performance of quantitative image analysis techniques. In this paper, an adaptation of the Non-Local (NL-) means filter is proposed for speckle reduction in ultrasound (US) images. Since the filter was originally developed for additive white Gaussian noise, we propose a Bayesian framework to derive an NL-means filter adapted to a relevant ultrasound noise model. Quantitative results on synthetic data show the performance of the proposed method compared to well-established and state-of-the-art methods. Results on real images demonstrate that the proposed method is able to accurately preserve edges and structural details of the image. PMID:19482578
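
    A standard (Gaussian-noise) NL-means baseline, relative to which the Bayesian speckle-adapted version is the refinement, is available in scikit-image; the synthetic input and parameters below are illustrative:

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    # Synthetic stand-in for a log-compressed ultrasound image.
    rng = np.random.default_rng(6)
    image = np.clip(rng.normal(0.5, 0.1, (128, 128)), 0, 1)

    sigma = float(np.mean(estimate_sigma(image)))
    # Classic NL-means weights patches by Gaussian-noise similarity; the paper
    # replaces this distance with a Bayesian, speckle-model-specific one.
    denoised = denoise_nl_means(image, patch_size=5, patch_distance=6,
                                h=0.8 * sigma, sigma=sigma, fast_mode=True)
    print(denoised.shape)
    ```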

  19. Theoretical performance analysis of doped optical fibers based on pseudo parameters

    NASA Astrophysics Data System (ADS)

    Karimi, Maryam; Seraji, Faramarz E.

    2010-09-01

    Characterization of doped optical fibers (DOFs) is an essential primary stage in the design of DOF-based devices. This paper presents the design of novel measurement techniques to determine DOF parameters using mono-beam propagation in a low-loss medium by generating pseudo parameters for the DOFs. The designed techniques are able to simultaneously characterize the absorption and emission cross-sections (ACS and ECS) and the dopant concentration of DOFs. In both proposed techniques, we assume pseudo parameters for the DOFs instead of their actual values and show that the choice of these pseudo parameter values for the design of DOF-based devices, such as erbium-doped fiber amplifiers (EDFAs), is appropriate, with a negligible error compared to the actual parameter values. Utilization of pseudo ACS and ECS values in the design procedure of EDFAs does not require measurement of the background loss coefficient (BLC) and simplifies the rate equation of the DOFs. It is shown that by using the pseudo parameter values obtained by the proposed techniques, the error in the gain of a designed EDFA with a BLC of about 1 dB/km is about 0.08 dB. It is further indicated that the same scenario holds for BLCs lower than 5 dB/m and higher than 12 dB/m. The proposed characterization techniques involve simple procedures and are low cost, which makes them advantageous for the manufacturing of DOFs.

  20. A simplified technique for delivering total body irradiation (TBI) with improved dose homogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao Rui; Bernard, Damian; Turian, Julius

    2012-04-15

    Purpose: Total body irradiation (TBI) with megavoltage photon beams has been accepted as an important component of management for a number of hematologic malignancies, generally as part of bone marrow conditioning regimens. The purpose of this paper is to present and discuss the authors' TBI technique, which both simplifies the treatment process and improves the treatment quality. Methods: An AP/PA TBI treatment technique to produce uniform dose distributions using sequential collimator reductions during each fraction was implemented, and a sample calculation worksheet is presented. Using this methodology, the dosimetric characteristics of both 6 and 18 MV photon beams, including lung dose under cerrobend blocks, were investigated. A method of estimating midplane lung doses based on measured entrance and exit doses was proposed, and the estimated results were compared with measurements. Results: Whole body midplane dose uniformity of ±10% was achieved with no more than two collimator-based beam modulations. The proposed model predicted midplane lung doses 5% to 10% higher than the measured doses for 6 and 18 MV beams. The estimated total midplane doses were within ±5% of the prescribed midplane dose on average, except for the lungs, where the doses were 6% to 10% lower than the prescribed dose on average. Conclusions: The proposed TBI technique can achieve dose uniformity within ±10%. This technique is easy to implement and does not require complicated dosimetry and/or compensators.

  1. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, techniques for nanoparticles have been rapidly developed for various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.

  2. A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals

    NASA Astrophysics Data System (ADS)

    Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.

    2018-03-01

    A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, and indentation. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparison to existing numerical and analytical studies. In this study, the three most common metal crystal structures are investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close-packed (HCP) crystal structures, where the stress and slip rate fields around the moving contact point singularity are presented.

  3. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular feature extraction and prediction method currently in use; it showed an accuracy of 65.7%. The proposed method, however, predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
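
    The t-test selection and score fusion stages can be sketched independently of the CNN. Below, the CNN features are simulated, and the fusion simply sums per-feature Gaussian log-likelihood ratios between the two classes, which is an illustrative simplification of the paper's likelihood-ratio fusion:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Simulated CNN feature matrices for two stimulus classes (trials x features).
    Xa = rng.normal(0.0, 1.0, (80, 64))
    Xa[:, :8] += 0.9                     # 8 informative dimensions
    Xb = rng.normal(0.0, 1.0, (80, 64))

    # 1) Keep features whose class difference is significant (two-sample t-test).
    _, p = stats.ttest_ind(Xa, Xb, axis=0)
    keep = p < 0.01
    Xa_s, Xb_s = Xa[:, keep], Xb[:, keep]

    # 2) Score fusion: sum per-feature Gaussian log-likelihood ratios.
    mu_a, sd_a = Xa_s.mean(0), Xa_s.std(0) + 1e-9
    mu_b, sd_b = Xb_s.mean(0), Xb_s.std(0) + 1e-9

    def llr(x):
        """Positive LLR favours class A, negative favours class B."""
        return (stats.norm.logpdf(x, mu_a, sd_a)
                - stats.norm.logpdf(x, mu_b, sd_b)).sum()

    trial = Xa_s[0]
    print("selected features:", keep.sum(), "| LLR:", round(llr(trial), 2))
    ```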

  4. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  5. Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image.

    PubMed

    Kumar, M; Mishra, S K

    2017-01-01

    Clinical magnetic resonance imaging (MRI) images may get corrupted due to the presence of a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself according to the requirement and can be effectively applied to suppress mixed noise in different MRI images. In view of this, a novel nonlinear neural network based adaptive filter, i.e. the functional link artificial neural network (FLANN), whose weights are trained by a recently developed derivative-free meta-heuristic technique, i.e. teaching learning based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating a nonparametric statistical test. The convergence curve and computational time are also included for investigating the efficiency of the proposed as well as competitive filters. The simulation outcomes show that the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized to remove different noises and artifacts from other medical images more competently.
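
    The functional-link expansion at the heart of a FLANN can be sketched as below; for brevity the weights are fitted by ordinary least squares here, whereas the paper tunes them with TLBO (the window handling and expansion order are illustrative choices):

    ```python
    import numpy as np

    def flann_expand(x, order=2):
        # Trigonometric functional-link expansion of a window of pixel values.
        feats = [x]
        for k in range(1, order + 1):
            feats += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
        return np.concatenate(feats)

    def fit_weights(windows, targets, order=2):
        # Least-squares stand-in for the TLBO training used in the paper.
        Phi = np.array([flann_expand(w, order) for w in windows])
        return np.linalg.lstsq(Phi, targets, rcond=None)[0]

    def filter_pixel(window, weights, order=2):
        return flann_expand(window, order) @ weights
    ```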

  6. Hybrid method to predict the resonant frequencies and to characterise dual band proximity coupled microstrip antennas

    NASA Astrophysics Data System (ADS)

    Varma, Ruchi; Ghosh, Jayanta

    2018-06-01

    A new hybrid technique, which is a combination of a neural network (NN) and a support vector machine, is proposed for the design of different slotted dual band proximity coupled microstrip antennas. Slots on the patch are employed to produce the second resonance along with size reduction. The proposed hybrid model provides the flexibility to design dual band antennas in the frequency range from 1 to 6 GHz. This includes the DCS (1.71-1.88 GHz), PCS (1.88-1.99 GHz), UMTS (1.92-2.17 GHz), LTE2300 (2.3-2.4 GHz), Bluetooth (2.4-2.485 GHz), WiMAX (3.3-3.7 GHz), and WLAN (5.15-5.35 GHz, 5.725-5.825 GHz) band applications. A comparative study of the proposed technique against existing methods, such as the knowledge-based NN and the support vector machine, is also presented. The proposed method is found to be more accurate in terms of % error and root mean square % error, and the results are in good accord with the measured values.

  7. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  8. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  9. Characterisation of debris from laser and mechanical cutting of bone.

    PubMed

    Rachmanis, Nikolaos; McGuinness, Garrett B; McGeough, Joseph A

    2014-07-01

    Laser cutting of bones has been proposed as a technology in orthopaedic surgery. In this short study, the laser-bone interaction was examined using a pulsed erbium-doped yttrium aluminium garnet laser and compared to a conventional cutting technique. Microscopic analysis revealed the nature of waste debris and showed higher proportions of finer particles for conventional sagittal sawing compared to laser cutting. © IMechE 2014.

  10. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    PubMed

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including the localisation of four retinal structures, feature extraction, and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for localising the macula region in order to detect maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset, presenting the dataset collection, the expert diagnosis process, and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
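
    For the localisation stage, the Circular Hough Transform step might look like the following OpenCV sketch, here applied to find one circular structure such as the optic disc (all parameter values are illustrative and would need tuning per dataset):

    ```python
    import cv2
    import numpy as np

    def find_circular_structure(gray, min_r, max_r):
        # Smooth, then run the Circular Hough Transform.
        blurred = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=gray.shape[0] // 2,
                                   param1=100, param2=30,
                                   minRadius=min_r, maxRadius=max_r)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)  # strongest candidate
        return x, y, r
    ```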

  11. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. A great variety of approaches and new techniques has emerged within the evolutionary computing area to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition. PMID:28894461

  12. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. A great variety of approaches and new techniques has emerged within the evolutionary computing area to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition.
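
    The core grey wolf optimizer loop (Mirjalili's update rule) is sketched below; in the work above, each position would encode the MGNN design parameters and f would return the recognition error. The bounds, population size, and objective are placeholders:

    ```python
    import numpy as np

    def gwo_minimize(f, dim, n_wolves=20, iters=100, lb=-1.0, ub=1.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n_wolves, dim))
        for t in range(iters):
            fit = np.apply_along_axis(f, 1, X)
            alpha, beta, delta = X[np.argsort(fit)[:3]]  # three best wolves
            a = 2 - 2 * t / iters                        # decreases 2 -> 0
            for i in range(n_wolves):
                Xi = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = np.abs(C * leader - X[i])
                    Xi += (leader - A * D) / 3.0         # average of the moves
                X[i] = np.clip(Xi, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        return X[np.argmin(fit)]
    ```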

  13. Dynamic re-weighted total variation technique and statistical iterative reconstruction method for x-ray CT metal artifact reduction

    NASA Astrophysics Data System (ADS)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, the DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset, which are a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, namely SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
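
    The "re-weighting" idea can be illustrated in a few lines: the weights are set inversely to the local gradient magnitude, so edges are penalized less than flat regions, and in a dynamic scheme they would be recomputed as the SIR iterations progress. This is a generic RWTV-style sketch, not the authors' exact DRWTV formulation:

    ```python
    import numpy as np

    def rwtv_weights(u, eps=1e-3):
        # Forward differences of the current image estimate u.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        # Small gradients (flat areas) get large weights; edges get small ones.
        return 1.0 / (np.sqrt(gx ** 2 + gy ** 2) + eps)
    ```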

  14. A New Approach to Predict user Mobility Using Semantic Analysis and Machine Learning.

    PubMed

    Fernandes, Roshan; D'Souza G L, Rio

    2017-10-19

    Mobility prediction is a technique in which the future location of a user is identified in a given network. Mobility prediction provides solutions to many day-to-day problems. It helps in seamless handovers in wireless networks, provides better location-based services, and supports recalculating paths in Mobile Ad hoc Networks (MANET). In the present study, a framework is presented which predicts user mobility in the presence and absence of mobility history. A naïve Bayesian classification algorithm and a Markov model are used to predict the user's future location when mobility history is available. An attempt is made to predict the user's future location by using Short Message Service (SMS) and instantaneous geographical coordinates in the absence of mobility patterns. The performance metrics of the proposed technique are compared with the commonly used Markov chain model. From the experimental results it is evident that the techniques used in this work give better results when considering both spatial and temporal information. The proposed method predicts the user's future location in the absence of mobility history reasonably well. The proposed work is applied to predict the mobility of medical rescue vehicles and social security systems.
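
    When mobility history is available, the Markov-model part reduces to counting transitions between visited locations and predicting the most frequent successor, as in this minimal sketch (the location labels are illustrative):

    ```python
    from collections import Counter, defaultdict

    class MarkovMobility:
        """First-order Markov model over visited locations."""
        def __init__(self):
            self.trans = defaultdict(Counter)

        def fit(self, trajectory):
            for prev, nxt in zip(trajectory, trajectory[1:]):
                self.trans[prev][nxt] += 1

        def predict(self, current):
            if not self.trans[current]:
                return None  # no history for this location
            return self.trans[current].most_common(1)[0][0]

    m = MarkovMobility()
    m.fit(["home", "work", "gym", "home", "work", "cafe", "home"])
    print(m.predict("home"))  # most frequent successor of "home" -> "work"
    ```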

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern of typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified, whereas for two earthquakes, no pattern is evident.

  16. Ionic liquid-based ultrasonic/microwave-assisted extraction combined with UPLC-MS-MS for the determination of tannins in Galla chinensis.

    PubMed

    Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo

    2012-01-01

    An ionic liquid was used as the extraction solvent for tannins from Galla chinensis in a simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (enhanced by 11.7-22.0%) and shorter extraction time (from 6 h down to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.

  17. Evaluation of macrozone dimensions by ultrasound and EBSD techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreau, Andre, E-mail: Andre.Moreau@cnrc-nrc.gc.ca; Toubal, Lotfi; Ecole de technologie superieure, 1100, rue Notre-Dame Ouest, Montreal, QC, Canada H3C 1K3

    2013-01-15

    Titanium alloys are known to have texture heterogeneities, i.e. regions much larger than the grain dimensions, where the local orientation distribution of the grains differs from one region to the next. The electron backscattering diffraction (EBSD) technique is the method of choice to characterize these macro regions, which are called macrozones. Qualitatively, the images obtained by EBSD show that these macrozones may be larger or smaller, elongated or equiaxed. However, often no well-defined boundaries are observed between the macrozones and it is very hard to obtain objective and quantitative estimates of the macrozone dimensions from these data. In the present work, we present a novel, non-destructive ultrasonic technique that provides objective and quantitative characteristic dimensions of the macrozones. The obtained dimensions are based on the spatial autocorrelation function of fluctuations in the sound velocity. Thus, a pragmatic definition of macrozone dimensions naturally arises from the ultrasonic measurement. This paper has three objectives: 1) to disclose the novel, non-destructive ultrasonic technique to measure macrozone dimensions, 2) to propose a quantitative and objective definition of macrozone dimensions adapted to and arising from the ultrasonic measurement, and which is also applicable to the orientation data obtained by EBSD, and 3) to compare the macrozone dimensions obtained using the two techniques on two samples of the near-alpha titanium alloy IMI834. In addition, it was observed that macrozones may present a semi-periodical arrangement. Highlights: ► Discloses a novel, ultrasonic NDT technique to measure macrozone dimensions ► Proposes a quantitative and objective definition of macrozone dimensions ► Compares macrozone dimensions obtained using EBSD and ultrasonics on 2 Ti samples ► Observes that macrozones may have a semi-periodical arrangement.

  18. Poster — Thur Eve — 03: Application of the non-negative matrix factorization technique to [¹¹C]-DTBZ dynamic PET data for the early detection of Parkinson's disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dong-Chang; Jans, Hans; McEwan, Sandy

    2014-08-15

    In this work, a class of non-negative matrix factorization (NMF) technique known as alternating non-negative least squares, combined with the projected gradient method, is used to analyze twenty-five [¹¹C]-DTBZ dynamic PET/CT brain datasets. For each subject, a two-factor model is assumed and two factors representing the striatum (factor 1) and the non-striatum (factor 2) tissues are extracted using the proposed NMF technique and the commercially available factor analysis software “Pixies”. The extracted factor 1 and 2 curves represent the binding site of the radiotracer and describe the uptake and clearance of the radiotracer by soft tissues in the brain, respectively. The proposed NMF technique uses prior information about the dynamic data to obtain sample time-activity curves representing the striatum and the non-striatum tissues. These curves are then used for “warm” starting the optimization. Factor solutions from the two methods are compared graphically and quantitatively. In healthy subjects, radiotracer uptake by factors 1 and 2 is approximately 35–40% and 60–65%, respectively. The solutions are also used to develop a factor-based metric for the detection of early, untreated Parkinson's disease. The metric stratifies healthy subjects from suspected Parkinson's patients (based on the graphical method). The analysis shows that both techniques produce comparable results with similar computational time. The “semi-automatic” approach used by the NMF technique allows clinicians to manually set a starting condition for “warm” starting the optimization in order to facilitate control and efficient interaction with the data.
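
    A bare-bones two-factor alternating non-negative least squares loop is sketched below; the paper's "warm" start would replace the random initialization of H with the sample striatum and non-striatum time-activity curves, and its projected-gradient solver is swapped here for scipy's nnls:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def anls_nmf(V, k=2, iters=50, seed=0):
        """Factorize V (voxels x time) as W @ H with W, H >= 0."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            # Update H column by column, then W row by row.
            H = np.column_stack([nnls(W, V[:, j])[0] for j in range(n)])
            W = np.array([nnls(H.T, V[i, :])[0] for i in range(m)])
        return W, H
    ```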

  19. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.

  20. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images, over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine the capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
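
    One plausible reading of the witness mechanism is sketched below: the LSB plane is left untouched, and the second LSB records whether the LSB already equals the secret bit, so the receiver knows when to invert. The exact encoding rule in the paper may differ; this is only an illustration on a flattened uint8 image:

    ```python
    import numpy as np

    def embed_witness(cover, bits):
        stego = cover.copy()
        lsb = cover[:len(bits)] & 1
        witness = (lsb == bits).astype(cover.dtype)  # 1 = "read LSB as-is"
        stego[:len(bits)] = (cover[:len(bits)] & ~np.uint8(2)) | (witness << 1)
        return stego

    def extract_witness(stego, n):
        lsb = stego[:n] & 1
        witness = (stego[:n] >> 1) & 1
        return np.where(witness == 1, lsb, 1 - lsb)
    ```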

  1. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092
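
    For the circular (range-based) variant, a weighted least squares position fix can be sketched by linearizing the range equations against a reference anchor and weighting each row by its measurement confidence; the names and weighting scheme are illustrative:

    ```python
    import numpy as np

    def wls_position(anchors, d, w):
        """anchors: (n, 2) known positions; d: (n,) RSS-derived distances;
        w: (n,) per-measurement weights (higher = more trusted)."""
        x0, y0 = anchors[0]
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
        s = np.sqrt(w[1:])                      # row weights
        sol, *_ = np.linalg.lstsq(s[:, None] * A, s * b, rcond=None)
        return sol                              # estimated (x, y)
    ```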

  2. Added Value of Assessing Adnexal Masses with Advanced MRI Techniques

    PubMed Central

    Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.

    2015-01-01

    This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542

  3. Use of activity theory-based need finding for biomedical device development.

    PubMed

    Rismani, Shalaleh; Ratto, Matt; Machiel Van der Loos, H F

    2016-08-01

    Identifying the appropriate needs for biomedical device design is challenging, especially for less structured environments. The paper proposes an alternate need-finding method based on Cultural Historical Activity Theory and expanded to explicitly examine the role of devices within a socioeconomic system. This is compared to a conventional need-finding technique in a preliminary study with engineering student teams. The initial results show that the Activity Theory-based technique allows teams to gain deeper insights into their needs space.

  4. Advanced Packaging Materials and Techniques for High Power TR Module: Standard Flight vs. Advanced Packaging

    NASA Technical Reports Server (NTRS)

    Hoffman, James Patrick; Del Castillo, Linda; Miller, Jennifer; Jenabi, Masud; Hunter, Donald; Birur, Gajanana

    2011-01-01

    The higher output power densities required of modern radar architectures, such as the proposed DESDynI [Deformation, Ecosystem Structure, and Dynamics of Ice] SAR [Synthetic Aperture Radar] Instrument (or DSI) require increasingly dense high power electronics. To enable these higher power densities, while maintaining or even improving hardware reliability, requires advances in integrating advanced thermal packaging technologies into radar transmit/receive (TR) modules. New materials and techniques have been studied and compared to standard technologies.

  5. Plasma sheath effects on ion collection by a pinhole

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.; Snyder, David B.

    1993-01-01

    This work presents tables to assist in the evaluation of pinhole collection effects on spacecraft. These tables summarize results of a computer model which tracks particle trajectories through a simplified electric field in the plasma sheath. A technique is proposed to account for plasma sheath effects in the application of these results and scaling rules are proposed to apply the calculations to specific situations. This model is compared to ion current measurements obtained by another worker, and the agreement is very good.

  6. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
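
    The key property behind such estimators: for a scaled Poisson observation y = a·Poisson(λ), the local variance is proportional to the local mean with slope a (and the Anscombe transform 2·sqrt(y/a + 3/8) then stabilizes the variance to roughly 1). Below is a generic mean-variance regression sketch, not the authors' exact estimator:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def estimate_poisson_gain(img, win=7):
        x = img.astype(float)
        mu = uniform_filter(x, win)                  # local mean
        var = uniform_filter(x ** 2, win) - mu ** 2  # local variance
        a, _ = np.polyfit(mu.ravel(), var.ravel(), 1)
        return a                                     # slope = gain estimate
    ```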

  7. Hetero-Material Gate Doping-Less Tunnel FET and Its Misalignment Effects on Analog/RF Parameters

    NASA Astrophysics Data System (ADS)

    Anand, Sunny; Sarin, R. K.

    2018-03-01

    In this paper, with the use of a hetero-material gate technique, a tunnel field-effect transistor (TFET) based on the charge plasma technique is proposed, named the hetero-material gate doping-less tunnel FET (HMG-DLTFET), and a brief study has been done on the effects of misalignment of the bottom gate towards the drain (GMAD) and towards the source (GMAS). The proposed device provides better performance, with the drive current increased by three times compared to the conventional doping-less TFET (DLTFET). The results are then analyzed and compared with the conventional doped hetero-material gate double-gate tunnel FET (HMG-DGTFET). The analog/radiofrequency (RF) performance has been studied for both devices and a comparative analysis has been done for different parameters such as drain current (I D), transconductance (g m), output conductance (g d), total gate capacitance (C gg) and cutoff frequency (f T). Both devices performed similarly in the different misalignment configurations. When the bottom gate is perfectly aligned, the best performance is observed for both devices, but the doping-less device gives slightly more freedom to fabrication engineers, as the misalignment tolerance of the HMG-DLTFET is better than that of the HMG-DGTFET.

  8. Rapid repair of severely earthquake-damaged bridge piers with flexural-shear failure mode

    NASA Astrophysics Data System (ADS)

    Sun, Zhiguo; Wang, Dongsheng; Du, Xiuli; Si, Bingjun

    2011-12-01

    An experimental study was conducted to investigate the feasibility of a proposed rapid repair technique for severely earthquake-damaged bridge piers with flexural-shear failure mode. Six circular pier specimens were first tested to severe damage in flexural-shear mode and repaired using early-strength concrete with high fluidity and carbon fiber reinforced polymers (CFRP). After about four days, the repaired specimens were tested to failure again. The seismic behavior of the repaired specimens was evaluated and compared to that of the original specimens. Test results indicate that the proposed repair technique is highly effective. Both the shear strength and the lateral displacement of the repaired piers increased when compared to the original specimens, and the failure mechanism of the piers shifted from flexural-shear failure to ductile flexural failure. Finally, a simple design model based on the Seible formulation for post-earthquake repair design was compared to the experimental results. It is concluded that the design equation for bridge pier strengthening before an earthquake could be applicable to seismic repairs after an earthquake if the shear strength contribution of the spiral bars in the repaired piers is disregarded and 1.5 times more FRP sheets are provided.

  9. Accounting for the Confound of Meninges in Segmenting Entorhinal and Perirhinal Cortices in T1-Weighted MRI.

    PubMed

    Xie, Long; Wisse, Laura E M; Das, Sandhitsu R; Wang, Hongzhi; Wolk, David A; Manjón, Jose V; Yushkevich, Paul A

    2016-10-01

    Quantification of medial temporal lobe (MTL) cortices, including entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as in early diagnosis and monitoring of Alzheimer's disease. However, ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because of the adjacent meninges that have similar intensity to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with a super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of the meninges is segmented as gray matter by existing algorithms but not by our algorithm. Cross-validation experiments demonstrate promising segmentation accuracy. Further, the agreement between the volume and thickness measures from the proposed pipeline and those from the manual segmentations increases dramatically as a result of accounting for the confound of meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms compared to other techniques using T1w MRI. Although the performance of the proposed pipeline is inferior to that using T2-weighted MRI, which is optimized to image MTL sub-structures, the proposed pipeline could still provide important utility in analyzing many existing large datasets that only have T1w MRI available.

  10. A web-based overview, systematic review and meta-analysis of pancreatic anastomosis techniques following pancreatoduodenectomy.

    PubMed

    Daamen, Lois A; Smits, F Jasmijn; Besselink, Marc G; Busch, Olivier R; Borel Rinkes, Inne H; van Santvoort, Hjalmar C; Molenaar, I Quintus

    2018-05-14

    Many pancreatic anastomoses have been proposed to reduce the incidence of postoperative pancreatic fistula (POPF) after pancreatoduodenectomy, but a complete overview is lacking. This systematic review and meta-analysis aims to provide an online overview of all pancreatic anastomosis techniques and to evaluate the incidence of clinically relevant POPF in randomized controlled trials (RCTs). A literature search was performed up to December 2017. Included were studies giving a detailed description of the pancreatic anastomosis after open pancreatoduodenectomy and RCTs comparing techniques for the incidence of POPF (International Study Group of Pancreatic Surgery [ISGPS] Grade B/C). Meta-analyses were performed using a random-effects model. A total of 61 different anastomoses were found and summarized in 19 subgroups (www.pancreatic-anastomosis.com). In 6 RCTs, the POPF rate was 12% after pancreaticogastrostomy (n = 69/555) versus 20% after pancreaticojejunostomy (n = 106/531) (RR 0.59; 95% CI 0.35-1.01, P = 0.05). Six RCTs comparing subtypes of pancreaticojejunostomy showed a pooled POPF rate of 10% (n = 109/1057). Duct-to-mucosa and invagination pancreaticojejunostomy showed similar results, respectively 14% (n = 39/278) versus 10% (n = 27/278) (RR 1.40, 95% CI 0.47-4.15, P = 0.54). The proposed online overview can be used as an interactive platform, for uniformity in reporting anastomotic techniques and for educational purposes. The meta-analysis showed no significant difference in POPF rate between pancreatic anastomosis techniques. Copyright © 2018 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

  11. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    PubMed

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
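
    In scikit-learn terms, the overall pipeline might be sketched as below, with an RBF SVC standing in for the LS-SVM (which scikit-learn does not provide) and a plain cross-validated grid over the kernel bandwidth standing in for the paper's data-driven selection criterion:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    pipe = Pipeline([
        ("kpca", KernelPCA(kernel="rbf", n_components=10)),
        ("clf", SVC(kernel="rbf")),
    ])
    # gamma = 1 / (2 sigma^2) controls the RBF bandwidth of the KPCA step.
    grid = GridSearchCV(pipe, {"kpca__gamma": np.logspace(-4, 1, 12)},
                        scoring="roc_auc", cv=5)
    # grid.fit(X, y); grid.best_params_["kpca__gamma"]
    ```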

  12. On the electrochemical deposition of metal–organic frameworks

    DOE PAGES

    Campagnol, Nicolo; Van Assche, Tom R. C.; Li, Minyuan; ...

    2016-02-11

    In this paper we study and compare the anodic and cathodic electrodeposition of Metal–Organic Frameworks (MOFs) and suggest guidelines for the electrodeposition of new MOFs with this technique. HKUST-1 was electrodeposited both anodically and cathodically, and a four-step mechanism is proposed to explain the anodic synthesis.

  13. A novel pre-processing technique for improving image quality in digital breast tomosynthesis.

    PubMed

    Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong

    2017-02-01

    Nonlinear pre-reconstruction processing of the projection data in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, is usually discouraged, since such processing would violate the physics of image formation in CT. However, one can devise a pre-processing step to enhance the detectability of lesions in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since the detection of lesions such as micro-calcifications and masses in breasts is the purpose of using DBT, a technique producing higher detectability of lesions is justified as a virtue. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value that represents the boundary between breast and background. After that, both histogram parts were shifted by an appropriate offset and the histogram-modified projection data were log-transformed. The filtered-backprojection (FBP) algorithm was used for image reconstruction of DBT. To evaluate the performance of the proposed method, we computed the detectability index for images reconstructed from clinically acquired data. Typical breast border enhancement artifacts were greatly suppressed and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without invoking additional image artifacts. In this work, we report a novel pre-processing technique that improves the detectability of lesions in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique. The proposed method not only increased the lesion detectability but also reduced typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
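
    Following the description above, the projection-domain step can be sketched as below; the breast/background threshold and the offset are dataset-dependent choices, and the exact rule in the paper may differ:

    ```python
    import numpy as np

    def modify_projection(raw, air_thresh, offset):
        proj = raw.astype(float)
        background = proj >= air_thresh      # high raw counts = unattenuated air
        proj[background] = air_thresh        # flatten background to the boundary
        proj += offset                       # shift both histogram parts
        return -np.log(proj / proj.max())    # log-transform to line integrals
    ```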

  14. a Modified Genetic Algorithm for Finding Fuzzy Shortest Paths in Uncertain Networks

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Delavar, M. R.

    2016-06-01

    In realistic network analysis, there are several uncertainties in the measurements and computation of the arcs and vertices. These uncertainties should also be considered in the shortest path problem (SPP) due to the inherent fuzziness in the body of expert knowledge. In this paper, we investigated the SPP under uncertainty to evaluate our modified genetic strategy. We improved the performance of the genetic algorithm (GA) to investigate a class of shortest path problems on networks with vague arc weights. The solutions of the uncertain SPP, considering fuzzy path lengths, are examined and compared in detail. As a robust metaheuristic, the GA is modified and evaluated to tackle the fuzzy SPP (FSPP) with uncertain arcs. For this purpose, first, a dynamic operation is implemented to enrich the exploration/exploitation patterns of the conventional procedure and mitigate the premature convergence of the GA technique. Then, the modified GA (MGA) strategy is used to solve the FSPP. The attained results of the proposed strategy are compared to those of the GA with regard to cost, quality of paths, and CPU times. Numerical instances are provided to demonstrate the success of the proposed MGA-FSPP strategy in comparison with the GA. The simulations affirm that the proposed technique not only outperforms the GA, but also effectively improves the quality of the paths. The results also demonstrate that the proposed method can efficiently be utilized to handle the FSPP in uncertain networks.
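
    For reference, a non-GA baseline for the FSPP is easy to state: represent each arc weight as a triangular fuzzy number and run Dijkstra on a defuzzified ranking such as the centroid, as in this sketch (the graph and ranking rule are illustrative; the paper instead evolves paths with the MGA):

    ```python
    import heapq

    def centroid(tfn):
        a, b, c = tfn                 # triangular fuzzy number (low, mode, high)
        return (a + b + c) / 3.0

    def fuzzy_dijkstra(graph, src, dst):
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, u, path = heapq.heappop(pq)
            if u == dst:
                return cost, path
            if u in seen:
                continue
            seen.add(u)
            for v, tfn in graph.get(u, []):
                if v not in seen:
                    heapq.heappush(pq, (cost + centroid(tfn), v, path + [v]))
        return float("inf"), []

    g = {"A": [("B", (2, 3, 5)), ("C", (1, 2, 3))],
         "B": [("D", (1, 1, 2))], "C": [("D", (4, 5, 7))], "D": []}
    print(fuzzy_dijkstra(g, "A", "D"))  # -> (4.66..., ['A', 'B', 'D'])
    ```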

  15. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  16. Improved optical efficiency of bulk laser amplifiers with femtosecond written waveguides

    NASA Astrophysics Data System (ADS)

    Bukharin, Mikhail A.; Lyashedko, Andrey; Skryabin, Nikolay N.; Khudyakov, Dmitriy V.; Vartapetov, Sergey K.

    2016-04-01

    In this paper we propose an improved technique for three-dimensional waveguide writing with direct femtosecond laser inscription technology. The technique allows, to the best of our knowledge for the first time, the production of waveguides with a mode field diameter larger than 200 μm. This result broadens the field of application of femtosecond writing technology into bulk laser schemes and creates an opportunity to develop novel amplifiers with increased efficiency. We propose a novel architecture of laser amplifier that combines free-space propagation of a signal beam with low divergence and propagation of pump irradiation inside a femtosecond-written waveguide with large mode field diameter due to the total internal reflection effect. Such a scheme provides constant tight confinement of the pump irradiation over the full length of the active laser element (3-10 cm). The novel amplifier architecture was investigated numerically and experimentally in Nd:phosphate glass. Waveguides with a 200 μm mode field diameter were written with a high-frequency femtosecond oscillator. The proposed technique of three-dimensional waveguide writing is based on reducing and compensating the spherical aberration effect by writing in a heat-accumulation regime and dynamically adjusting the pulse energy at different writing depths. It was shown that the written waveguides could increase the optical efficiency of the amplifier up to 4 times compared with corresponding conventional free-space schemes. The novelty of the results lies in the technique of femtosecond writing of waveguides with large mode field diameter; the proposed architecture can improve by up to 4 times the optical efficiency of conventional bulk laser schemes, and especially of ultrafast pulse laser amplifiers.

  17. A Comparative Study with RapidMiner and WEKA Tools over some Classification Techniques for SMS Spam

    NASA Astrophysics Data System (ADS)

    Foozy, Cik Feresa Mohd; Ahmad, Rabiah; Faizal Abdollah, M. A.; Chai Wen, Chuah

    2017-08-01

    SMS spamming is a serious attack in which advertisements are spread in bulk through text messages. Unwanted advertising SMS messages disturb users and violate their privacy. To overcome these issues, many studies have proposed detecting SMS spam by using data mining tools. This paper presents a comparative study of five machine learning techniques, namely Naïve Bayes, K-NN (K-Nearest Neighbour algorithm), Decision Tree, Random Forest and Decision Stumps, comparing the accuracy obtained with RapidMiner and WEKA on the SMS Spam dataset from the UCI Machine Learning repository.
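
    As a concrete baseline for one of these techniques, a naïve Bayes SMS classifier takes only a few lines with scikit-learn (the toy messages below are illustrative; the study itself uses the UCI corpus inside RapidMiner and WEKA):

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["WIN a FREE prize now!!!", "Are we still on for lunch?",
             "URGENT! Claim your reward", "See you at the meeting"]
    labels = ["spam", "ham", "spam", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["free prize waiting"]))  # -> ['spam']
    ```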

  18. Comparing Laser Interferometry and Atom Interferometry Approaches to Space-Based Gravitational-Wave Measurement

    NASA Technical Reports Server (NTRS)

    Baker, John; Thorpe, Ira

    2012-01-01

    Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.

  19. Reducing charging effects in scanning electron microscope images by Rayleigh contrast stretching method (RCS).

    PubMed

    Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y

    2011-01-01

    To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with the Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique has been compared to some existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE and recursive mean-separate HE. Other post-processing methods, such as the wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts. Copyright © 2011 Wiley Periodicals, Inc.
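
    The two stages can be sketched generically: map the empirical intensity CDF through an inverse Rayleigh CDF, then linearly stretch between chosen percentiles. The parameter values are illustrative, and the published method's exact re-scaling rule may differ:

    ```python
    import numpy as np

    def rayleigh_stretch(img, sigma=0.3, clip=(0.02, 0.98)):
        flat = img.astype(float).ravel()
        ranks = np.argsort(np.argsort(flat)) / (flat.size - 1.0)
        u = np.clip(ranks, 0.0, 1.0 - 1e-6)
        ray = sigma * np.sqrt(-2.0 * np.log(1.0 - u))    # inverse Rayleigh CDF
        lo, hi = np.quantile(ray, clip)
        out = np.clip((ray - lo) / (hi - lo), 0.0, 1.0)  # contrast stretching
        return out.reshape(img.shape)
    ```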

  20. Adaptive output-based command shaping for sway control of a 3D overhead crane with payload hoisting and wind disturbance

    NASA Astrophysics Data System (ADS)

    Abdullahi, Auwalu M.; Mohamed, Z.; Selamat, H.; Pota, Hemanshu R.; Zainal Abidin, M. S.; Ismail, F. S.; Haruna, A.

    2018-01-01

    Payload hoisting and wind disturbance during crane operations are among the challenging factors that affect payload sway and thus the crane's performance. This paper proposes a new online adaptive output-based command shaping (AOCS) technique for effective payload sway reduction of an overhead crane under the influence of those effects. This technique enhances the previously developed output-based command shaping (OCS), which was effective only for a fixed system without external disturbances. Unlike the conventional input shaping design technique, which requires the system's natural frequency and damping ratio, the proposed technique is designed by using the output signal, and thus an online adaptive algorithm can be formulated. To test the effectiveness of the AOCS, experiments are carried out using a laboratory overhead crane with payload hoisting in the presence of wind, and with different payloads. The superiority of the method is confirmed by 82% and 29% reductions in the overall sway and the maximum transient sway, respectively, when compared to the OCS and two robust input shapers, namely the Zero Vibration Derivative-Derivative and Extra-Insensitive shapers. Furthermore, the method demonstrates a uniform crane performance under all conditions. It is envisaged that the proposed method can be very useful in designing an effective controller for a crane system with an unknown payload under the influence of external disturbances.
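
    For contrast, the model-based baseline the abstract alludes to is easy to write down: a Zero Vibration (ZV) shaper needs exactly the natural frequency and damping ratio that the AOCS avoids estimating. A standard textbook sketch:

    ```python
    import numpy as np

    def zv_shaper(wn, zeta):
        """Two-impulse ZV shaper from natural frequency wn (rad/s) and
        damping ratio zeta; convolve the impulses with the command signal."""
        wd = wn * np.sqrt(1.0 - zeta ** 2)               # damped frequency
        K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
        amps = np.array([1.0, K]) / (1.0 + K)
        times = np.array([0.0, np.pi / wd])
        return amps, times
    ```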

  1. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make the identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms" so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared.

  2. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE PAGES

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...

    2015-12-17

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  3. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  4. Robust infrared target tracking using discriminative and generative approaches

    NASA Astrophysics Data System (ADS)

    Asha, C. S.; Narasimhadhan, A. V.

    2017-09-01

    The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although a lot of advancement has been achieved for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking of an object using a single feature or technique often fails to achieve greater accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative technique makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences obtained from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is accomplished compared with the existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers is illustrated to clearly demonstrate an overall increase in performance.

  5. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

    In this paper, the well-known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets, using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to state-of-the-art boosting algorithms, in particular those using ELM as a base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.

  6. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
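    To make the setting concrete, a classical greedy PLA with O(1) work and memory per sample can be built by maintaining upper and lower slope bounds for the current segment (in the spirit of swing-filter algorithms). The sketch below guarantees a per-sample error of at most eps; it is an illustrative baseline under these assumptions, not the authors' algorithm, which additionally bounds worst-case execution time and latency.

```python
import math

def swing_pla(points, eps):
    """Greedy piecewise linear approximation with an O(1) update per
    sample: keep slope bounds [lo, hi] such that every sample of the
    current segment stays within +/- eps of a line through (x0, y0).
    Assumes (x, y) pairs with strictly increasing x, len(points) >= 2."""
    def end_segment(x0, y0, lo, hi, last_x):
        slope = 0.5 * (lo + hi)            # any slope in [lo, hi] is valid
        return (x0, y0), (last_x, y0 + slope * (last_x - x0))

    segments = []
    (x0, y0), last = points[0], points[0]
    lo, hi = -math.inf, math.inf
    for x, y in points[1:]:
        dx = x - x0
        new_lo = max(lo, (y - eps - y0) / dx)
        new_hi = min(hi, (y + eps - y0) / dx)
        if new_lo > new_hi:                # no line fits any more: close segment
            segments.append(end_segment(x0, y0, lo, hi, last[0]))
            x0, y0 = last                  # restart at the previous sample
            dx = x - x0
            lo = (y - eps - y0) / dx
            hi = (y + eps - y0) / dx
        else:
            lo, hi = new_lo, new_hi
        last = (x, y)
    segments.append(end_segment(x0, y0, lo, hi, last[0]))
    return segments
```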

  7. Use of simulated evaporation to assess the potential for scale formation during reverse osmosis desalination

    USGS Publications Warehouse

    Huff, G.F.

    2004-01-01

    The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated input water evaporation can be used as a technique to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25 °C and 40 °C from 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form, owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO desalination.

  8. Sliding-slab three-dimensional TSE imaging with a spiral-In/Out readout.

    PubMed

    Li, Zhiqiang; Wang, Dinghui; Robison, Ryan K; Zwart, Nicholas R; Schär, Michael; Karis, John P; Pipe, James G

    2016-02-01

    T2-weighted imaging is of great diagnostic value in neuroimaging. Three-dimensional (3D) Cartesian turbo spin echo (TSE) scans provide high signal-to-noise ratio (SNR) and contiguous slice coverage. The purpose of this preliminary work is to implement a novel 3D spiral TSE technique with image quality comparable to 2D/3D Cartesian TSE. The proposed technique uses multislab 3D TSE imaging. To mitigate the slice boundary artifacts, a sliding-slab method is extended to spiral imaging. A spiral-in/out readout is adopted to minimize the artifacts that may be present with the conventional spiral-out readout. Phase errors induced by B0 eddy currents are measured and compensated to allow for the combination of the spiral-in and spiral-out images. A nonuniform slice encoding scheme is used to reduce the truncation artifacts while preserving the SNR performance. Preliminary results show that each of the individual measures contributes to the overall performance, and the image quality of the results obtained with the proposed technique is, in general, comparable to that of 2D or 3D Cartesian TSE. 3D sliding-slab TSE with a spiral-in/out readout provides good-quality T2-weighted images and, therefore, may become a promising alternative to Cartesian TSE.

  9. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.

  10. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
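    The sequence-estimation core is the standard Viterbi recursion over a trellis whose states are the candidate interpolation functions. A generic log-domain sketch is given below; the array names and probability model are illustrative, not the paper's specific parameter-free model.

```python
import numpy as np

def viterbi(log_prior, log_trans, log_like):
    """MAP state sequence by dynamic programming.

    log_prior: (S,) log p(s_0); log_trans: (S, S) log p(s_t | s_{t-1});
    log_like: (T, S) log-likelihood of each observation under each state."""
    T, S = log_like.shape
    delta = log_prior + log_like[0]         # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_like[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # trace the backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```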

  11. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images degrade the performance of SRC and of most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms, which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  12. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research, as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  13. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach

    PubMed Central

    Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e., k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
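    As a minimal illustration of the similarity-as-activation idea, the sketch below computes the Tanimoto (Jaccard) coefficient between two binary fingerprints. It is only one commonly used coefficient, not the paper's full set of 16 (the Sokal/Sneath(1) coefficient performed best in their experiments), and the toy fingerprints are illustrative.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient for binary fingerprints:
    |a AND b| / |a OR b|, with 0.0 for two all-zero vectors."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# two toy 8-bit fingerprints (illustrative, not real ECFP_6 data)
print(tanimoto([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0]))
```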

  14. Nonlinear Earthquake Analysis of Reinforced Concrete Frames with Fiber and Bernoulli-Euler Beam-Column Element

    PubMed Central

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is researched for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predicted-corrected form of the Bossak-α method is applied as the dynamic integration scheme. Experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a portal reinforced concrete frame are obtained to compare the proposed solution technique with a fibre element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined. PMID:24578667

  15. Smart Grid Privacy through Distributed Trust

    NASA Astrophysics Data System (ADS)

    Lipton, Benjamin

    Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
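    The distributed-trust idea can be illustrated with plain additive secret sharing: each meter splits its reading into shares that individually look uniformly random, so no single aggregation server ever sees an unshared value, yet the servers' partial sums reveal the total. The sketch below is a minimal illustration under assumed parameters (a public modulus, three servers), not the specific protocols evaluated in the paper.

```python
import secrets

P = 2**61 - 1  # public modulus (illustrative choice)

def share(value, n):
    """Split a meter reading into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each server sums the one share it received from every meter;
    adding the per-server partial sums recovers only the total."""
    partials = [sum(col) % P for col in zip(*all_shares)]
    return sum(partials) % P

readings = [7, 12, 5]                      # three meters (illustrative)
matrix = [share(r, 3) for r in readings]   # one row of shares per meter
assert aggregate(matrix) == sum(readings) % P
```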

  16. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940

  17. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.

  18. Non-uniform refractive index field measurement based on light field imaging technique

    NASA Astrophysics Data System (ADS)

    Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong

    2018-02-01

    In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect four-dimensional light field data, and the light field data are then decoded according to the light field imaging principle to obtain image sequences with different acquisition angles of the refractive index field. Subsequently, the PIV (Particle Image Velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field can be calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors to synchronously collect data from multiple angles, the method proposed in this paper requires only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measured the distribution of the refractive index field above the flame of an alcohol lamp.

  19. Automatic identification of the number of food items in a meal using clustering techniques based on the monitoring of swallowing and chewing.

    PubMed

    Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Fontana, Juan M; Sazonov, Edward

    2012-09-01

    The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food types. Data collected from experiments of 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, the AHC achieved an accuracy >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integral application for objective Monitoring of Ingestive Behavior in free living conditions.
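    A minimal sketch of the unsupervised step using scikit-learn's AffinityPropagation, which infers the number of clusters (here, food items) automatically rather than requiring it as an input. The feature matrix is a random stand-in for the chewing/swallowing features used in the study.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# stand-in for per-swallow/chew feature vectors (40 events, 3 features)
X = rng.random((40, 3))

ap = AffinityPropagation(random_state=0).fit(X)
n_food_items = len(ap.cluster_centers_indices_)  # AP infers the count
print("estimated number of food items:", n_food_items)
```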

  20. Co-Design Method and Wafer-Level Packaging Technique of Thin-Film Flexible Antenna and Silicon CMOS Rectifier Chips for Wireless-Powered Neural Interface Systems.

    PubMed

    Okabe, Kenji; Jeewan, Horagodage Prabhath; Yamagiwa, Shota; Kawano, Takeshi; Ishida, Makoto; Akita, Ippei

    2015-12-16

    In this paper, a co-design method and a wafer-level packaging technique for a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture and can help avoid the use of external matching components, resulting in the realization of a small-size system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on a very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In a demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% relative to the antenna size, implying that misalignment has less effect on the WPT characteristics than with electromagnetic induction.

  1. Co-Design Method and Wafer-Level Packaging Technique of Thin-Film Flexible Antenna and Silicon CMOS Rectifier Chips for Wireless-Powered Neural Interface Systems

    PubMed Central

    Okabe, Kenji; Jeewan, Horagodage Prabhath; Yamagiwa, Shota; Kawano, Takeshi; Ishida, Makoto; Akita, Ippei

    2015-01-01

    In this paper, a co-design method and a wafer-level packaging technique for a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture and can help avoid the use of external matching components, resulting in the realization of a small-size system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on a very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In a demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% relative to the antenna size, implying that misalignment has less effect on the WPT characteristics than with electromagnetic induction. PMID:26694407

  2. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the fields of technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. Here we address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
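    As a sketch of the cloud-to-cloud comparison modes mentioned above, the following uses a k-d tree for nearest-neighbour distances and SciPy's directed Hausdorff distance. The point clouds are random stand-ins for the photogrammetric and TLS models; this is not the authors' dedicated tool.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def cloud_to_cloud(a, b):
    """Nearest-neighbour distances from every point of cloud a to cloud b;
    returns the mean and maximum point-to-point deviation."""
    d, _ = cKDTree(b).query(a)
    return d.mean(), d.max()

rng = np.random.default_rng(1)
pm = rng.random((1000, 3))    # stand-in for the photogrammetric model
tls = rng.random((5000, 3))   # stand-in for the denser TLS reference
print(cloud_to_cloud(pm, tls), directed_hausdorff(pm, tls)[0])
```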

  3. The comparative evaluation of expanded national immunization policies in Korea using an analytic hierarchy process.

    PubMed

    Shin, Taeksoo; Kim, Chun-Bae; Ahn, Yang-Heui; Kim, Hyo-Youl; Cha, Byung Ho; Uh, Young; Lee, Joo-Heon; Hyun, Sook-Jung; Lee, Dong-Han; Go, Un-Yeong

    2009-01-29

    The purpose of this paper is to propose new evaluation criteria and an analytic hierarchy process (AHP) model to assess the expanded national immunization programs (ENIPs) and to evaluate two alternative health care policies. One of the alternative policies is that private clinics and hospitals would offer free vaccination services to children and the other of them is that public health centers would offer these free vaccination services. Our model to evaluate the ENIPs was developed using brainstorming, Delphi techniques, and the AHP model. We first used the brainstorming and Delphi techniques, as well as literature reviews, to determine 25 criteria with which to evaluate the national immunization policy; we then proposed a hierarchical structure of the AHP model to assess ENIPs. By applying the proposed AHP model to the assessment of ENIPs for Korean immunization policies, we show that free vaccination services should be provided by private clinics and hospitals rather than public health centers.
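    For illustration, the priority weights of an AHP pairwise-comparison matrix can be obtained from its principal eigenvector, which is the standard AHP prioritization step; the judgment values below are made up, not taken from the paper's 25-criterion model.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector of a pairwise-comparison matrix: the normalized
    principal eigenvector (standard AHP computation)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
    return w / w.sum()

# made-up judgments: criterion A is 3x as important as B, 5x as C
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(M))   # roughly [0.65, 0.23, 0.12]
```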

  4. Sensing Methods for Detecting Analog Television Signals

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi

    This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals, including NTSC, PAL and SECAM. We propose a correlation based method (CBM) with a single reference signal for sensing any analog TV signal. In addition, we also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in the Singapore TV white space (WS) test trial conducted by the Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results for the CBM method are presented in the paper, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels are considered. It is shown that the theoretical results closely match those from simulations. Sensing performance of the hardware prototype is also presented in a fading environment by using a fading simulator. We present the performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time, etc. We also present a comparative study of the various techniques.

  5. A New Position Location System Using DTV Transmitter Identification Watermark Signals

    NASA Astrophysics Data System (ADS)

    Wang, Xianbin; Wu, Yiyan; Chouinard, Jean-Yves

    2006-12-01

    A new position location technique using the transmitter identification (TxID) RF watermark in digital TV (DTV) signals is proposed in this paper. The conventional global positioning system (GPS) usually does not work well inside buildings due to the high frequency and weak field strength of the signal. In contrast to GPS, DTV signals are received from transmitters at relatively short distance, and the broadcast transmitters operate at levels of up to megawatts of effective radiated power (ERP). Also, the RF frequency of the DTV signal is much lower than that of GPS, which makes it easier for the signal to penetrate buildings and other objects. The proposed position location system based on the DTV TxID signal is presented in this paper. Practical receiver implementation issues, including non-ideal correlation and synchronization, are analyzed and discussed. Performance of the proposed technique is evaluated through Monte Carlo simulations and compared with other existing position location systems. Possible ways to improve the accuracy of the new position location system are discussed.

  6. Solving Fractional Programming Problems based on Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Raouf, Osama Abdel; Hezam, Ibrahim M.

    2014-04-01

    This paper presents a new approach to solve Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The test aims to prove the capability of the SI algorithms to solve any type of FPP. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm Intelligence proves to be an effective technique for solving linear or nonlinear, non-differentiable fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithm. The results obtained using the two SI algorithms revealed the superiority of the proposed technique over others in computational time, and notably better accuracy was observed in the solutions of the industrial application problems.
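    A minimal particle swarm optimization loop for a linear fractional objective is sketched below. The objective coefficients, box bounds, and PSO hyperparameters (inertia 0.7, acceleration factors 1.5) are assumptions for illustration, not the paper's benchmark problems or tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Illustrative linear fractional objective (not from the paper):
    minimize (2*x1 + 3*x2 + 1) / (x1 + x2 + 2) over a box."""
    return (2 * x[:, 0] + 3 * x[:, 1] + 1) / (x[:, 0] + x[:, 1] + 2)

lo, hi, n, iters = 0.0, 5.0, 30, 200
x = rng.uniform(lo, hi, (n, 2))
v = np.zeros_like(x)
pbest, pval = x.copy(), f(x)          # personal bests
gbest = pbest[pval.argmin()]          # global best
for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)        # enforce box constraints
    val = f(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()]
print(gbest, pval.min())
```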

  7. Imaging of human vertebral surface using ultrasound RF data received at each element of probe for thoracic anesthesia

    NASA Astrophysics Data System (ADS)

    Takahashi, Kazuki; Taki, Hirofumi; Onishi, Eiko; Yamauchi, Masanori; Kanai, Hiroshi

    2017-07-01

    Epidural anesthesia is a common technique for perioperative analgesia and chronic pain treatment. Since ultrasonography is insufficient for depicting the human vertebral surface, most examiners perform epidural puncture using body-surface landmarks on the back, such as the spinous process and scapulae, without any imaging guidance, including ultrasonography. The puncture route to the epidural space at the thoracic vertebrae is much narrower than that at the lumbar vertebrae, and therefore epidural anesthesia at the thoracic vertebrae is difficult, especially for a beginner. Herein, a novel imaging method is proposed based on a bi-static imaging technique that makes use of the transmit beam width and direction. In an in vivo experimental study on human thoracic vertebrae, the proposed method succeeded in depicting the vertebral surface clearly compared with conventional B-mode imaging and the conventional envelope method. This indicates the potential of the proposed method in visualizing the vertebral surface for the proper and safe execution of epidural anesthesia.

  8. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  9. Multivariable PID controller design tuning using bat algorithm for activated sludge process

    NASA Astrophysics Data System (ADS)

    Atikah Nor’Azlan, Nur; Asmiza Selamat, Nur; Mat Yahya, Nafrizuan

    2018-04-01

    This project concerns the design of a multivariable PID (MPID) controller for a multi-input multi-output activated sludge process, applying four MPID tuning methods: Davison, Penttinen-Koivo, Maciejowski, and a proposed combined method. The aim of this study is to investigate the performance of a selected optimization technique, the Bat Algorithm (BA), in tuning the MPID controller parameters. All MPID-BA tuning results are compared and analyzed, and the best MPID-BA is then chosen to determine which technique performs best in terms of the system's transient response.

  10. Muscle activity characterization by laser Doppler Myography

    NASA Astrophysics Data System (ADS)

    Scalise, Lorenzo; Casaccia, Sara; Marchionni, Paolo; Ercoli, Ilaria; Primo Tomasini, Enrico

    2013-09-01

    Electromyography (EMG) is the gold-standard technique used for the evaluation of muscle activity. This technique is used in biomechanics, sport medicine, neurology and rehabilitation therapy, and it provides the electrical activity produced by skeletal muscles. Among the parameters measured with EMG, important quantities are signal amplitude and duration of muscle contraction, muscle fatigue, and maximum muscle power. Recently, a new measurement procedure, named Laser Doppler Myography (LDMi), has been proposed for the non-contact assessment of muscle activity by measuring the vibro-mechanical behaviour of the muscle. The aim of this study is to present the LDMi technique and to evaluate its capacity to measure characteristic features of muscle activity. In this paper, LDMi is compared with standard surface EMG (sEMG), which requires the application of sensors on the skin of each patient. sEMG and LDMi signals have been simultaneously acquired and processed to test correlations. Three parameters have been analyzed to compare these techniques: muscle activation timing, signal amplitude, and muscle fatigue. LDMi appears to be a reliable and promising measurement technique, allowing measurements without contact with the patient's skin.

  11. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  12. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.

  13. Experimental validation of spatial Fourier transform-based multiple sound zone generation with a linear loudspeaker array.

    PubMed

    Okamoto, Takuma; Sakaguchi, Atsushi

    2017-03-01

    Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques, for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness compared with conventional acoustic energy difference maximization has been demonstrated by computer simulations. To establish the effectiveness of the proposal in actual environments, this paper presents an experimental validation of the proposed approach with rectangular and Hann windows, comparing it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.

  14. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011), on the other hand, demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. As with the gradients, the crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over the iterations. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method, and the source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated with the simultaneous-source technique. Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).

  15. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed, based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured, and the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the small number of fringes; the method does not need any auxiliary phase-shifting facilities (lowering cost) and is easy to implement, without any phase unwrapping process.
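    A simplified sketch of the model-based idea: characterize the phase with a few low-order Zernike terms and fit the coefficients by least squares. The paper solves a non-linear problem over fringe intensity data; here a linear fit to a synthetic phase map on the unit disk merely illustrates the Zernike parameterization, and the coefficients are made up.

```python
import numpy as np

def zernike_basis(x, y):
    """First few (unnormalized) Zernike terms on the unit disk:
    piston, tip, tilt, defocus, and the two astigmatism terms."""
    r2 = x**2 + y**2
    return np.column_stack([np.ones_like(x), x, y,
                            2 * r2 - 1, x**2 - y**2, 2 * x * y])

# synthetic "measured" phase on a disk grid (illustrative coefficients)
g = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(g, g)
mask = X**2 + Y**2 <= 1
x, y = X[mask], Y[mask]
true_c = np.array([0.1, 0.5, -0.3, 0.8, 0.05, -0.2])
phase = zernike_basis(x, y) @ true_c

coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), phase, rcond=None)
print(np.round(coeffs, 3))   # recovers true_c on noiseless data
```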

  16. A novel load balanced energy conservation approach in WSN using biogeography based optimization

    NASA Astrophysics Data System (ADS)

    Kaushik, Ajay; Indu, S.; Gupta, Daya

    2017-09-01

    Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks (WSNs). Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper we propose a novel load balancing approach using biogeography based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods; it shows better performance than all previous works on energy conservation in WSNs.

  17. Methods of localization of Lamb wave sources on thin plates

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2015-04-01

    Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method on plates, based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of the technique to classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz - 1 MHz. The experimental setup consists of a glass plate measuring 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the above-mentioned sensors placed at different locations on the plate. Numerical simulations are performed using a dispersive far-field approximation of plate waves, with signals generated by a Hertzian loading on the plate; the effect of reflections is included using imaginary sources outside the plate boundaries. The proposed method can be adapted to 3D environments to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
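    For comparison purposes, the classical arrival-time localization baseline can be posed as a small nonlinear least-squares problem over the source position and onset time. The sketch below uses synthetic picks, an assumed non-dispersive wave speed, and illustrative sensor positions; it is the baseline method, not the proposed amplitude-attenuation inversion.

```python
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.05, 0.05], [0.75, 0.05],
                    [0.05, 0.35], [0.75, 0.35]])   # metres (illustrative)
c = 3000.0   # assumed constant plate-wave speed, m/s (dispersion ignored)

def residuals(p, t_arr):
    """Arrival-time residuals for source position (x, y) and onset t0."""
    x, y, t0 = p
    d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return t_arr - (t0 + d / c)

# synthetic picks for a source at (0.4, 0.2) with t0 = 0
t_meas = np.hypot(sensors[:, 0] - 0.4, sensors[:, 1] - 0.2) / c
sol = least_squares(residuals, x0=[0.3, 0.1, 0.0], args=(t_meas,))
print(sol.x[:2])   # recovered source coordinates
```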

  18. Cone-beam volume CT mammographic imaging: feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ning, Ruola

    2001-06-01

    X-ray projection mammography, using a film/screen combination or digital techniques, has proven to be the most effective imaging modality for early detection of breast cancer currently available. However, the inherent superimposition of structures makes small carcinomas (a few millimeters in size) difficult to detect in occult cases or in dense breasts, resulting in a high false-positive biopsy rate. Cone-beam x-ray projection based volume imaging using flat panel detectors (FPDs) makes it possible to obtain three-dimensional breast images. This may benefit diagnosis of the structure and pattern of the lesion while eliminating hard compression of the breast. This paper presents a novel cone-beam volume CT mammographic imaging protocol based on the above techniques. Through computer simulation, the key issues of the system and imaging techniques are addressed, including the x-ray imaging geometry and corresponding reconstruction algorithms, x-ray characteristics of breast tissues, x-ray setting techniques, absorbed dose estimation, and the quantitative effect of x-ray scattering on image quality. The preliminary simulation results support the proposed cone-beam volume CT mammographic imaging modality with respect to feasibility and practicality for mammography. The absorbed dose level is comparable to that of current two-view mammography and would not be a prominent problem for this imaging protocol. Compared to traditional mammography, the proposed imaging protocol with isotropic spatial resolution will potentially provide significantly better low-contrast detectability of breast tumors and more accurate localization of breast lesions.

  19. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach sometimes produced unstable HRFs. The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.
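    The PCA family of corrections removes the dominant components of the (time x channel) signal, which motion artifacts tend to dominate; targeted PCA applies this only to flagged motion segments. A minimal global-PCA sketch under assumed dimensions (the channel count, component count, and data are illustrative):

```python
import numpy as np

def pca_motion_correct(data, n_remove=2):
    """Remove the first n_remove principal components of a
    (time x channel) fNIRS array via SVD; global PCA correction
    (tPCA would restrict this to flagged motion segments)."""
    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    s[:n_remove] = 0.0                 # drop dominant (motion) components
    return U @ np.diag(s) @ Vt + mean

clean = pca_motion_correct(np.random.randn(1000, 16))
```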

  20. Evaluation of Carbon Anodes for Rechargeable Lithium Cells

    NASA Technical Reports Server (NTRS)

    Huang, C-K.; Surampudi, S.; Attia, A.; Halpert, G.

    1993-01-01

    Both a liquid-phase intercalation technique and electrochemical intercalation techniques were examined for preparing Li-carbon materials. The electrochemical techniques include an intermittent discharge method and a two-step method; both can ensure that the maximum reversible Li capacity is achieved for common commercially available carbon materials. The carbon materials evaluated by the intercalation method include pitch coke, petroleum coke, PAN fiber, and graphite materials. Their reversible Li capacities were determined and compared. In this paper, we also demonstrate the importance of the EPDM binder composition in the carbon electrode; our results indicate that it can affect the Li intercalation and de-intercalation capacity of carbon materials. Finally, two possible explanations for the capacity degradation observed during practical cell cycling are proposed.

  1. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    PubMed

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal-to-noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: a noise classifier (NC) and a deep denoising autoencoder (DDAE), thus termed NC + DDAE. In a series of comprehensive experiments, we conduct qualitative and quantitative analyses of the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) the normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted with nine Mandarin-speaking CI recipients, who used their own clinical speech processors during testing. The experimental results of the objective evaluation and the listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. Compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and introduces less distortion of the key speech envelope information, thus improving speech recognition more effectively for Mandarin CI recipients. The results suggest that the proposed deep learning-based NR approach can potentially be integrated into existing CI signal processors to overcome the degradation of speech perception caused by noise.

  2. An efficient CU partition algorithm for HEVC based on improved Sobel operator

    NASA Astrophysics Data System (ADS)

    Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng

    2018-04-01

    As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit rate reduction with similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce the complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate improved Sobel filtering is developed and performed. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. Compared with the reference software HM16.7, experimental results indicate the proposed algorithm can reduce the encoding time by up to 44.09% on average, with a negligible bit rate increase of 0.24% and a quality loss below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion performance than other proposed works.
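    The texture test at the heart of such fast CU-partition schemes can be sketched with the plain Sobel operator (the paper uses an improved variant): a low mean gradient magnitude suggests a smooth CU whose splitting can terminate early. The block contents and the threshold below are illustrative only.

```python
import numpy as np
from scipy import ndimage

def texture_strength(cu):
    """Mean Sobel gradient magnitude of a CU (coding-unit) block;
    low values indicate a flat block, high values a textured one."""
    gx = ndimage.sobel(cu.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(cu.astype(float), axis=0)   # vertical gradient
    return np.hypot(gx, gy).mean()

cu = np.random.randint(0, 256, (64, 64))    # illustrative 64x64 CU
split_further = texture_strength(cu) > 30.0  # threshold is made up
print(split_further)
```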

  3. Predicting breast cancer using an expression values weighted clinical classifier.

    PubMed

    Thomas, Minta; De Brabanter, Kris; Suykens, Johan A K; De Moor, Bart

    2014-12-31

    Clinical data, such as patient history, laboratory analysis, and ultrasound parameters, which are the basis of day-to-day clinical decision support, are often used to guide the clinical management of cancer in the presence of microarray data. Several data fusion techniques are available to integrate genomics or proteomics data, but only a few studies have created a single prediction model using both gene expression and clinical data. These studies often remain inconclusive regarding an obtained improvement in prediction performance. To improve clinical management, these data should be fully exploited. This requires efficient algorithms to integrate these data sets and design a final classifier. LS-SVM classifiers and generalized eigenvalue/singular value decompositions are successfully used in many bioinformatics applications for prediction tasks. Building on the benefits of these two techniques, we propose a machine learning approach, a weighted LS-SVM classifier, to integrate two data sources: microarray and clinical parameters. We compared and evaluated the proposed methods on five breast cancer case studies. Compared to an LS-SVM classifier on individual data sets, generalized eigenvalue decomposition (GEVD) and kernel GEVD, the proposed weighted LS-SVM classifier offers good prediction performance, in terms of test area under the ROC curve (AUC), on all breast cancer case studies. Thus a clinical classifier weighted with the microarray data set results in significantly improved diagnosis, prognosis and prediction of responses to therapy. The proposed model has been shown to be a promising mathematical framework for both data fusion and non-linear classification problems.
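    For reference, a plain (unweighted) LS-SVM reduces training to a single linear system; the paper's contribution is a weighting scheme over clinical and microarray kernels, which is not reproduced in this minimal sketch. The toy data and gamma value are illustrative.

```python
import numpy as np

def lssvm_train(K, y, gamma=1.0):
    """Solve the standard LS-SVM dual system
        [ 0   1^T           ] [ b     ]   [ 0 ]
        [ 1   K + I / gamma ] [ alpha ] = [ y ]
    for the bias b and support values alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

# toy linear-kernel example with labels in {-1, +1} (illustrative)
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X @ X.T, y)
print(np.sign(X @ X.T @ alpha + b))   # reproduces the training labels
```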

  4. Spectra resolution for simultaneous spectrophotometric determination of lamivudine and zidovudine components in pharmaceutical formulation of human immunodeficiency virus drug based on using continuous wavelet transform and derivative transform techniques.

    PubMed

    Sohrabi, Mahmoud Reza; Tayefeh Zarkesh, Mahshid

    2014-05-01

    In the present paper, two spectrophotometric methods based on signal processing are proposed for the simultaneous determination of two components of an anti-HIV drug, lamivudine (LMV) and zidovudine (ZDV). The proposed methods are applied to synthetic binary mixtures and commercial pharmaceutical tablets without the need for any chemical separation procedures. The developed methods are based on the application of the Continuous Wavelet Transform (CWT) and Derivative Spectrophotometry (DS) combined with the zero-crossing point technique. The Daubechies (db5) wavelet family (242 nm) and the Dmey wavelet family (236 nm) were found to give the best results under optimum conditions for the simultaneous analysis of lamivudine and zidovudine, respectively. In addition, the first-derivative absorption spectra were selected for the determination of lamivudine and zidovudine at 266 nm and 248 nm, respectively. The presented methods were validated by assaying various synthetic mixtures of the components. Mean recovery values were found to be 100.31% and 100.2% for CWT, and 99.42% and 97.37% for DS, for the determination of LMV and ZDV, respectively. The results obtained from analyzing the real samples by the proposed methods were compared to the HPLC reference method. A one-way ANOVA test at the 95% confidence level was applied to the results. The statistical comparison of the proposed methods with the reference method showed no significant differences.
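
    The zero-crossing idea behind the DS method can be sketched as follows: the mixture's first-derivative spectrum is read at the wavelength where the interfering component's derivative crosses zero, so the reading depends on one analyte only. The wavelengths follow the abstract (LMV at 266 nm, ZDV at 248 nm); the spectrum and calibration coefficients below are placeholders.

        # First-derivative, zero-crossing quantification (sketch).
        import numpy as np

        wavelengths = np.arange(200.0, 320.0, 0.5)        # nm
        absorbance = np.random.rand(wavelengths.size)     # mixture spectrum (placeholder)

        d1 = np.gradient(absorbance, wavelengths)         # first-derivative spectrum

        def amplitude_at(lam):
            return np.interp(lam, wavelengths, d1)

        # At each working wavelength the other component's first derivative
        # crosses zero, so the reading is proportional to a single analyte.
        lmv_signal = amplitude_at(266.0)
        zdv_signal = amplitude_at(248.0)

        # Concentrations from pre-established linear calibrations (hypothetical).
        c_lmv = (lmv_signal - 0.002) / 0.015
        c_zdv = (zdv_signal - 0.001) / 0.012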

  5. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization on a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, namely the niched pre-selection multi-objective algorithm, ENORA (the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement), and NSGA-II (the non-dominated sorting genetic algorithm), was tested using a patient data set from an intensive care burn unit and a data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, a specificity of 0.9385, and a sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization.
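
    Step (1) rests on Pareto dominance between candidate classifiers scored by (accuracy, number of rules), maximizing the first and minimizing the second. A minimal sketch of that dominance test and front extraction follows; the scores are placeholders, and the actual ENORA/NSGA-II machinery is not reproduced.

        # Pareto-front extraction over (accuracy, n_rules) scores (sketch).
        def dominates(a, b):
            """a dominates b if it is no worse in both objectives
            (higher accuracy, fewer rules) and strictly better in one."""
            return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

        def pareto_front(candidates):
            return [c for c in candidates
                    if not any(dominates(o, c) for o in candidates if o is not c)]

        population = [(0.91, 22), (0.93, 14), (0.89, 9), (0.93, 18), (0.85, 5)]
        print(pareto_front(population))   # [(0.93, 14), (0.89, 9), (0.85, 5)]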

  6. Application of modified dynamic conformal arc (MDCA) technique on liver stereotactic body radiation therapy (SBRT) planning following RTOG 0438 guideline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Chengyu, E-mail: shicy1974@yahoo.com; Chen, Yong; Fang, Deborah

    2015-04-01

    Liver stereotactic body radiation therapy (SBRT) is a feasible treatment method for nonoperable patients with early-stage liver cancer. Treatment planning for SBRT is very important and has to consider simulation accuracy, planning time, treatment efficiency, etc. The modified dynamic conformal arc (MDCA) technique is a 3-dimensional conformal arc planning method, which has been proposed for liver SBRT planning at our center. In this study, we compared the MDCA technique with the RapidArc technique in terms of planning target volume (PTV) coverage and sparing of organs at risk (OARs). The results show that the MDCA technique has comparable plan quality to RapidArc considering PTV coverage, hot spots, heterogeneity index, and effective liver volume. For the 5 PTVs studied among 4 patients, the MDCA plan, when compared with the RapidArc plan, showed 9% more hot spots, a greater heterogeneity effect, more sparing of OARs, and a lower effective liver volume. The monitor unit (MU) number for the MDCA plan is much lower than for the RapidArc plans. The MDCA plan has the advantages of less planning time, no-collision treatment, and a lower MU number.

  7. Speckle tracking and speckle content based composite strain imaging for solid and fluid filled lesions.

    PubMed

    Rabbi, Md Shifat-E; Hasan, Md Kamrul

    2017-02-01

    While strain imaging of solid lesions provides an effective way of determining their pathologic condition by displaying tissue stiffness contrast, such imaging remains an open problem for fluid filled lesions. In this paper, we propose a novel speckle content based strain imaging technique for visualization and classification of fluid filled lesions in elastography, after automatic identification of the presence of fluid filled lesions. Speckle content based strain, defined as a function of speckle density based on the relationship between strain and speckle density, gives an indirect strain value for fluid filled lesions. To measure the speckle density of the fluid filled lesions, two new criteria are used, based on the oscillation count of the windowed radio frequency signal and the local variance of the normalized B-mode image. An improved speckle tracking technique is also proposed for strain imaging of the solid lesions and background. A wavelet-based integration technique is then proposed for combining the strain images from these two techniques, visualizing both the solid and fluid filled lesions in a common framework. The final output of our algorithm is a high quality composite strain image which can effectively visualize both solid and fluid filled breast lesions, in addition to the speckle content of the fluid filled lesions for their discrimination. The performance of our algorithm is evaluated using in vivo patient data and compared with recently reported techniques. The results show that both the solid and fluid filled lesions can be better visualized using our technique, and that the fluid filled lesions can be classified with good accuracy.

  8. Accelerating simultaneous algebraic reconstruction technique with motion compensation using CUDA-enabled GPU.

    PubMed

    Pang, Wai-Man; Qin, Jing; Lu, Yuqiang; Xie, Yongming; Chui, Chee-Kong; Heng, Pheng-Ann

    2011-03-01

    To accelerate the simultaneous algebraic reconstruction technique (SART) with motion compensation for speedy and quality computed tomography reconstruction by exploiting a CUDA-enabled GPU. Two core techniques are proposed to fit SART into the CUDA architecture: (1) a ray-driven projection along with hardware trilinear interpolation, and (2) a voxel-driven back-projection that can avoid redundant computation by combining CUDA shared memory. We utilize the independence of each ray and voxel in both techniques to design CUDA kernels that represent a ray in the projection and a voxel in the back-projection, respectively. Thus, significant parallelization and a performance boost can be achieved. For motion compensation, we rectify each ray's direction during the projection and back-projection stages based on a known motion vector field. Extensive experiments demonstrate the proposed techniques can provide faster reconstruction without compromising image quality. The processing rate is nearly 100 projections per second, which is about 150 times faster than a CPU-based SART. The reconstructed image is compared against ground truth visually and quantitatively by peak signal-to-noise ratio (PSNR) and line profiles. We further evaluate the reconstruction quality using quantitative metrics such as signal-to-noise ratio (SNR) and mean-square-error (MSE). All these reveal that satisfactory results are achieved. The effects of major parameters such as ray sampling interval and relaxation parameter are also investigated by a series of experiments. A simulated dataset is used for testing the effectiveness of our motion compensation technique. The results demonstrate our reconstructed volume can eliminate undesirable artifacts like blurring. Our proposed method has potential to realize instantaneous presentation of 3D CT volume to physicians once the projection data are acquired.
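
    For reference, a plain CPU sketch of one SART iteration is shown below; the paper's contributions (the CUDA kernels mapping one thread per ray or per voxel, and the motion-compensated ray bending) are beyond this illustration, and all data are synthetic.

        # One SART iteration on a dense system matrix (sketch).
        import numpy as np

        def sart_step(A, x, b, lam=0.3, eps=1e-12):
            """A: (n_rays, n_voxels) system matrix, x: current estimate,
            b: measured projections, lam: relaxation parameter."""
            row_sums = A.sum(axis=1) + eps        # per-ray normalization
            col_sums = A.sum(axis=0) + eps        # per-voxel normalization
            residual = (b - A @ x) / row_sums     # forward projection step
            return x + lam * (A.T @ residual) / col_sums   # back-projection step

        A = np.random.rand(200, 100)
        b = A @ np.random.rand(100)               # synthetic projections
        x = np.zeros(100)
        for _ in range(20):
            x = sart_step(A, x, b)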

  9. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-04-14

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.

  10. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
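
    As a hedged illustration of a PSD-based transmissibility estimate, the sketch below uses the standard cross-spectral form with a common reference channel; the paper's specific definition, built on the PSDs of BSS-recovered single-source signals, is not reproduced, and the signals are synthetic.

        # Cross-spectral transmissibility estimate between two channels (sketch).
        import numpy as np
        from scipy.signal import csd

        fs = 512.0
        t = np.arange(0, 30, 1 / fs)
        x_i = np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(t.size)
        x_j = 0.6 * np.sin(2 * np.pi * 12 * t + 0.4) + 0.1 * np.random.randn(t.size)
        x_k = x_i + x_j                           # common reference channel

        f, S_ik = csd(x_i, x_k, fs=fs, nperseg=1024)
        _, S_jk = csd(x_j, x_k, fs=fs, nperseg=1024)
        T_ij = S_ik / S_jk                        # transmissibility T_ij(f)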

  11. An Examination of Alternative Multidimensional Scaling Techniques

    ERIC Educational Resources Information Center

    Papazoglou, Sofia; Mylonas, Kostas

    2017-01-01

    The purpose of this study is to compare alternative multidimensional scaling (MDS) methods for constraining the stimuli on the circumference of a circle and on the surface of a sphere. Specifically, the existing MDS-T method for plotting the stimuli on the circumference of a circle is applied, and its extension is proposed for constraining the…

  12. Do E-Learning Tools Make a Difference? Results from a Case Study

    ERIC Educational Resources Information Center

    Desplaces, David; Blair, Carrie A.; Salvaggio, Trent

    2015-01-01

    Even as academics continue to debate whether distance education techniques are successful, the market demands increased distance education programs and a growing number of corporations are using e-learning to train their employees. We propose and examine a model comparing outcomes in 3 different pedagogical classroom settings: traditional,…

  13. Quantification of Spatial Heterogeneity in Old Growth Forest of Korean Pine

    Treesearch

    Wang Zhengquan; Wang Qingcheng; Zhang Yandong

    1997-01-01

    Spatial heterogeneity is a very important issue in studying functions and processes of ecological systems at various scales. Semivariogram analysis is an effective technique to summarize spatial data and quantify spatial heterogeneity. In this paper, we propose some principles for using semivariograms to characterize and compare the spatial heterogeneity of...
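
    A minimal empirical semivariogram for a 1-D transect can be sketched as follows; gamma(h) is half the mean squared increment at lag h, and the data are synthetic placeholders.

        # Empirical semivariogram of a 1-D transect (sketch).
        import numpy as np

        def semivariogram(z, max_lag):
            lags = np.arange(1, max_lag + 1)
            gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
            return lags, gamma

        z = np.cumsum(np.random.randn(300))   # spatially correlated dummy data
        lags, gamma = semivariogram(z, 50)    # rising gamma indicates spatial structure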

  14. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which each real-coded particle's boundary constraints are associated with the corresponding customers' time windows and combined with a penalty approach to eliminate infeasibilities arising from time-window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
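
    The penalty side of such a constraint handling scheme can be sketched as a tour evaluation that adds a cost proportional to the total time-window violation; the EMA-specific variable bounding is not reproduced, and the penalty weight is a hypothetical parameter.

        # Penalty-style evaluation of a TSPTW tour (sketch).
        import numpy as np

        def evaluate(tour, dist, windows, service=0.0, penalty=1000.0):
            """tour: visiting order; dist: distance matrix;
            windows[i] = (earliest, latest). Early arrivals wait;
            late arrivals are penalized."""
            t, cost, violation = 0.0, 0.0, 0.0
            prev = tour[0]
            for cur in tour[1:]:
                cost += dist[prev][cur]
                t += dist[prev][cur]
                early, late = windows[cur]
                t = max(t, early)                 # wait if arriving early
                violation += max(0.0, t - late)   # time-window violation
                t += service
                prev = cur
            return cost + penalty * violation

        dist = np.random.rand(5, 5) * 10
        windows = [(0.0, 100.0)] * 5
        print(evaluate([0, 2, 1, 3, 4], dist, windows))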

  15. Improved Spatial Differencing Scheme for 2-D DOA Estimation of Coherent Signals with Uniform Rectangular Arrays.

    PubMed

    Shi, Junpeng; Hu, Guoping; Sun, Fenggang; Zong, Binfeng; Wang, Xin

    2017-08-24

    This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we perform the difference operation only on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward-only ISD (FO-ISD) and forward-backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with the existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD can improve estimation performance significantly compared with the other methods, in both white and colored noise conditions.

  16. Modeling Progressive Damage Using Local Displacement Discontinuities Within the FEAMAC Multiscale Modeling Framework

    NASA Technical Reports Server (NTRS)

    Ranatunga, Vipul; Bednarcyk, Brett A.; Arnold, Steven M.

    2010-01-01

    A method for performing progressive damage modeling in composite materials and structures based on continuum level interfacial displacement discontinuities is presented. The proposed method enables the exponential evolution of the interfacial compliance, resulting in unloading of the tractions at the interface after delamination or failure occurs. In this paper, the proposed continuum displacement discontinuity model has been used to simulate failure within both isotropic and orthotropic materials efficiently and to explore the possibility of predicting the crack path, therein. Simulation results obtained from Mode-I and Mode-II fracture compare the proposed approach with the cohesive element approach and Virtual Crack Closure Techniques (VCCT) available within the ABAQUS (ABAQUS, Inc.) finite element software. Furthermore, an eccentrically loaded 3-point bend test has been simulated with the displacement discontinuity model, and the resulting crack path prediction has been compared with a prediction based on the extended finite element model (XFEM) approach.

  17. A RONI Based Visible Watermarking Approach for Medical Image Authentication.

    PubMed

    Thanki, Rohit; Borra, Surekha; Dwivedi, Vedvyas; Borisagar, Komal

    2017-08-09

    Nowadays, medical data in the form of image files are often exchanged between different hospitals for use in telemedicine and diagnosis. Visible watermarking is extensively used for intellectual property identification of such medical images, and can lead to serious issues if proper regions for watermark insertion are not identified. In this paper, a Region of Non-Interest (RONI) based visible watermarking scheme for medical image authentication is proposed. In this technique, the RONI of the cover medical image is first identified using a Human Visual System (HVS) model. Next, the watermark logo is visibly inserted into the RONI of the cover medical image to obtain the watermarked medical image. Finally, the watermarked medical image is compared with the original medical image to measure the imperceptibility and authenticity of the proposed scheme. The experimental results showed that the proposed scheme reduces the computational complexity and improves the PSNR when compared to many existing schemes.
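
    A minimal sketch of the visible embedding step follows, using simple alpha blending inside a region assumed to be the RONI; the HVS-based RONI detection of the paper is replaced here by a fixed corner purely for illustration.

        # Visible watermark insertion by alpha blending in a RONI (sketch).
        import numpy as np

        def embed_visible(cover, logo, top_left, alpha=0.35):
            """Blend a logo into cover at top_left; arrays in [0, 1]."""
            r, c = top_left
            h, w = logo.shape
            marked = cover.copy()
            roi = marked[r:r + h, c:c + w]
            marked[r:r + h, c:c + w] = (1 - alpha) * roi + alpha * logo
            return marked

        cover = np.random.rand(512, 512)      # stand-in medical image
        logo = np.ones((64, 64))              # stand-in hospital logo
        wm = embed_visible(cover, logo, (8, 8))

        # PSNR between cover and watermarked image, as used for imperceptibility.
        mse = np.mean((cover - wm) ** 2)
        psnr = 10 * np.log10(1.0 / mse)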

  18. Comparing optical test methods for a lightweight primary mirror of a space-borne Cassegrain telescope

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Yu, Zong-Ru; Lin, Yu-Chuan; Ho, Cheng-Fong; Huang, Ting-Ming; Chen, Cheng-Huan

    2014-09-01

    A Cassegrain telescope with a 450 mm clear aperture was developed for use in a spaceborne optical remote-sensing instrument. Self-weight deformation and thermal distortion were considered: to this end, Zerodur was used to manufacture the primary mirror. The lightweight scheme adopted a hexagonal cell structure yielding a lightweight ratio of 50%. In general, optical testing on a lightweight mirror is a critical technique during both the manufacturing and assembly processes. To prevent unexpected measurement errors that cause erroneous judgment, this paper proposes a novel and reliable analytical method for optical testing, called the bench test. The proposed algorithm was used to distinguish the manufacturing form error from surface deformation caused by the mounting, supporter and gravity effects for the optical testing. The performance of the proposed bench test was compared with a conventional vertical setup for optical testing during the manufacturing process of the lightweight mirror.

  1. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids, we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data, with implications for several domains including power systems.

  2. Measuring multi-joint stiffness during single movements: numerical validation of a novel time-frequency approach.

    PubMed

    Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R

    2012-01-01

    This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher-than-second-order systems with a non-parametric approach. The technique proposed here is highly robust to noise and can easily be used for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.

  3. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    PubMed

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research on recognizing online handwritten words in Indic scripts is at an early stage compared to that for Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts: Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymund, T.D.

    Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low-orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also includes ways to include a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.

  5. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim of this work is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for deployment in real time.
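
    The pairing of LSQR with a simplex search over the regularization parameter can be sketched as below; the paper's actual selection criterion is not reproduced, so a discrepancy-style objective stands in, and all data are synthetic.

        # LSQR with Nelder-Mead simplex search over the damping parameter (sketch).
        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        A = np.random.rand(120, 80)
        b = A @ np.random.rand(80) + 0.01 * np.random.randn(120)
        noise_level = 0.01 * np.sqrt(120)   # assumed known noise norm

        def objective(log_lam):
            lam = 10.0 ** log_lam[0]
            x = lsqr(A, b, damp=lam)[0]
            # Discrepancy principle: residual norm should match the noise level.
            return abs(np.linalg.norm(A @ x - b) - noise_level)

        res = minimize(objective, x0=[-2.0], method='Nelder-Mead')
        best_lam = 10.0 ** res.x[0]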

  6. Proposed evaluation framework for assessing operator performance with multisensor displays

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows for the determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or, 4) super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.

  7. Automatic ICD-10 multi-class classification of cause of death from plaintext autopsy reports through expert-driven feature selection.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2017-01-01

    Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports, through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures, approaching 85% to 90% for most metrics with a feature subset size of 30. The proposed system also showed an approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-coded cause of death from autopsy reports. It assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
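
    A hedged sketch of the unigram-plus-classifier setup is shown below, approximating expert-driven feature selection by restricting the vectorizer to a curated keyword list; the terms, labels, and reports are hypothetical placeholders.

        # Unigram features restricted to an expert vocabulary + random forest (sketch).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import make_pipeline

        expert_terms = ['fracture', 'haemorrhage', 'drowning', 'burn',
                        'asphyxia', 'laceration', 'contusion']   # illustrative list

        reports = ['multiple rib fracture with haemorrhage ...',
                   'signs of drowning, water in lungs ...']      # placeholder reports
        labels = ['V01-V99', 'W65-W74']                          # placeholder ICD-10 blocks

        clf = make_pipeline(
            CountVectorizer(vocabulary=expert_terms, lowercase=True),
            RandomForestClassifier(n_estimators=200, random_state=0),
        )
        clf.fit(reports, labels)
        print(clf.predict(['burn injuries consistent with ...']))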

  8. Towards scar-free surgery: An analysis of the increasing complexity from laparoscopic surgery to NOTES

    PubMed Central

    Chellali, Amine; Schwaitzberg, Steven D.; Jones, Daniel B.; Romanelli, John; Miller, Amie; Rattner, David; Roberts, Kurt E.; Cao, Caroline G.L.

    2014-01-01

    Background NOTES is an emerging technique for performing surgical procedures, such as cholecystectomy. Debate about its real benefit over the traditional laparoscopic technique is on-going. There have been several clinical studies comparing NOTES to conventional laparoscopic surgery. However, no work has been done to compare these techniques from a Human Factors perspective. This study presents a systematic analysis describing and comparing different existing NOTES methods to laparoscopic cholecystectomy. Methods Videos of endoscopic/laparoscopic views from fifteen live cholecystectomies were analyzed to conduct a detailed task analysis of the NOTES technique. A hierarchical task analysis of laparoscopic cholecystectomy and several hybrid transvaginal NOTES cholecystectomies was performed and validated by expert surgeons. To identify similarities and differences between these techniques, their hierarchical decomposition trees were compared. Finally, a timeline analysis was conducted to compare the steps and substeps. Results At least three variations of the NOTES technique were used for cholecystectomy. Differences between the observed techniques at the substep level of hierarchy and on the instruments being used were found. The timeline analysis showed an increase in time to perform some surgical steps and substeps in NOTES compared to laparoscopic cholecystectomy. Conclusion As pure NOTES is extremely difficult given the current state of development in instrumentation design, most surgeons utilize different hybrid methods – combination of endoscopic and laparoscopic instruments/optics. Results of our hierarchical task analysis yielded an identification of three different hybrid methods to perform cholecystectomy with significant variability amongst them. The varying degrees to which laparoscopic instruments are utilized to assist in NOTES methods appear to introduce different technical issues and additional tasks leading to an increase in the surgical time. The NOTES continuum of invasiveness is proposed here as a classification scheme for these methods, which was used to construct a clear roadmap for training and technology development. PMID:24902811

  9. A hybrid approach for efficient anomaly detection using metaheuristic methods

    PubMed Central

    Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.

    2014-01-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated based on the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to competing machine learning algorithms. PMID:26199752

  10. The generalised isodamping approach for robust fractional PID controllers design

    NASA Astrophysics Data System (ADS)

    Beschi, M.; Padula, F.; Visioli, A.

    2017-06-01

    In this paper, we present a novel methodology to design fractional-order proportional-integral-derivative controllers. Based on the description of the controlled system by means of a family of linear models parameterised with respect to a free variable that describes the real process operating point, we design the controller by solving a constrained min-max optimisation problem where the maximum sensitivity has to be minimised. Among the imposed constraints, the most important one is the new generalised isodamping condition, that defines the invariancy of the phase margin with respect to the free parameter variations. It is also shown that the well-known classical isodamping condition is a special case of the new technique proposed in this paper. Simulation results show the effectiveness of the proposed technique and the superiority of the fractional-order controller compared to its integer counterpart.

  11. A Filter Feature Selection Method Based on MFA Score and Redundancy Excluding and Its Application to Tumor Gene Expression Data Analysis.

    PubMed

    Li, Jiangeng; Su, Lei; Pang, Zenan

    2015-12-01

    Feature selection techniques have been widely applied to tumor gene expression data analysis in recent years. A filter feature selection method named the marginal Fisher analysis score (MFA score), which is based on graph embedding, has been proposed, and it has been widely used mainly because it is superior to the Fisher score. Considering the heavy redundancy in gene expression data, we propose a new filter feature selection technique in this paper, named MFA score+, based on MFA score and redundancy excluding. We applied it to an artificial dataset and eight tumor gene expression datasets to select important features, and then used a support vector machine as the classifier to classify the samples. Compared with MFA score, the t test, and Fisher score, it achieved higher classification accuracy.

  12. Selection of Hidden Layer Neurons and Best Training Method for FFNN in Application of Long Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj

    2012-05-01

    For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, plant structure planning, etc. A new technique for long-term load forecasting (LTLF) using an optimized feed-forward artificial neural network (FFNN) architecture is presented in this paper, which selects the optimal number of neurons in the hidden layer as well as the best training method for the case study. The prediction performance of the proposed technique is evaluated using the mean absolute percentage error (MAPE) between Thailand's private electricity consumption and the forecasted data. The results obtained are compared with the results of classical auto-regressive (AR) and moving average (MA) methods. It is, in general, observed that the proposed method is more accurate in prediction.
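
    The MAPE criterion used to score the forecasts is simple enough to state directly; the load values below are placeholders.

        # Mean absolute percentage error (MAPE) between actual and forecast series.
        import numpy as np

        def mape(actual, forecast):
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        print(mape([120.0, 135.0, 150.0], [118.0, 140.0, 149.0]))  # ~2.01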

  13. Enhanced protocol for real-time transmission of echocardiograms over wireless channels.

    PubMed

    Cavero, Eva; Alesanco, Alvaro; García, Jose

    2012-11-01

    This paper presents a methodology for transmitting clinical video over wireless networks in real time. A 3-D set partitioning in hierarchical trees compression prior to transmission is proposed. In order to guarantee the clinical quality of the compressed video, a clinical evaluation specific to each video modality has to be made. This evaluation indicates the minimum transmission rate necessary for an accurate diagnosis. However, the channel conditions produce errors and distort the video. A reliable application protocol is therefore proposed, using a hybrid solution in which either retransmission, or retransmission combined with forward error correction (FEC) techniques, is used, depending on the channel conditions. In order to analyze the proposed methodology, the 2-D mode of an echocardiogram has been assessed. A bandwidth of 200 kbps is necessary to guarantee its clinical quality. Transmission using the proposed solution, and using retransmission and FEC techniques working separately, has been simulated and compared in high-speed uplink packet access (HSUPA) and worldwide interoperability for microwave access (WiMAX) networks. The proposed protocol achieves guaranteed clinical quality at higher bit error rates than the other protocols; at a mobile speed of 60 km/h, the tolerated bit error rate is up to 3.3 times higher for HSUPA and 10 times higher for WiMAX.

  14. Unmitigated numerical solution to the diffraction term in the parabolic nonlinear ultrasound wave equation.

    PubMed

    Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan

    2013-09-01

    Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator, which in turn results in fewer computational grid nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in the calculation of the diffraction term of the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with that of the established time-domain methods.

  15. Enhanced electrocatalytic oxidation of isoniazid at electrochemically modified rhodium electrode for biological and pharmaceutical analysis.

    PubMed

    Cheemalapati, Srikanth; Chen, Shen-Ming; Ali, M Ajmal; Al-Hemaid, Fahad M A

    2014-09-01

    A simple and sensitive electrochemical method has been proposed for the determination of isoniazid (INZ). For the first time, a rhodium (Rh)-modified glassy carbon electrode (GCE) has been employed for the determination of INZ by the linear sweep voltammetry (LSV) technique. Compared with the unmodified electrode, the proposed Rh-modified electrode provides strong electrocatalytic activity toward INZ, with a significant enhancement in the anodic peak current. Scanning electron microscopy (SEM) and field emission scanning electron microscopy (FESEM) results reveal the morphology of the Rh particles. With the advantages of wide linearity (70-1300 μM), good sensitivity (0.139 μA μM⁻¹ cm⁻²), and a low detection limit (13 μM), this proposed sensor holds great potential for the determination of INZ in real samples. The practicality of the proposed electrode for the detection of INZ in human urine and blood plasma samples has been successfully demonstrated using the LSV technique. Through the determination of INZ in commercially available pharmaceutical tablets, the practical applicability of the proposed method has been validated. The recovery results are found to be in good agreement with the labeled amounts of INZ in tablets, thus showing its great potential for use in clinical and pharmaceutical analysis.

  16. [Management of spinal metastases by minimally invasive surgery techniques: Surgical principles and indications: A literature review].

    PubMed

    Toquart, A; Graillon, T; Mansouri, N; Adetchessi, T; Blondel, B; Fuentes, S

    2016-06-01

    Spinal metastases are becoming more frequent. This raises the issues of pain and neurological complications, which worsen the functional and survival prognosis of this oncological patient population. The surgical treatment must be as complete as possible, decompressing and stabilizing without delaying the management of the oncological disease. Minimally invasive surgical techniques are, by definition, less damaging to the musculocutaneous plane than open ones, with a comparable efficiency demonstrated in degenerative and traumatic surgery, so they seem applicable and appropriate to this patient population. We detail the different minimally invasive techniques proposed for the management of spinal metastases. For this, we used our experience developed in degenerative and traumatic pathologies, and we also referred to many authors, conducting a literature review using PubMed and Embase. Thirty-eight articles were selected and allowed us to describe the different techniques: percutaneous methods such as vertebro-/kyphoplasty and osteosynthesis, as well as mini-open surgery, through a posterior or anterior approach. We propose a surgical strategy using these minimally invasive techniques, first according to the predominant symptom (pain or neurological deficit), then the characteristics of the lesions (number, topography, type…) and the degree of deformity. Whatever the technique, the main goal is to stabilize and decompress, in order to maintain a good quality of life for these fragile patients, without delaying the medical management of the oncological disease.

  17. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.
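
    A minimal local linear fit at a single point, with a Gaussian kernel, can be sketched as follows; the profile least squares step that accounts for the AR error structure is omitted, and the data are synthetic with i.i.d. noise standing in for AR errors.

        # Local linear regression estimate of m(x0) via weighted least squares (sketch).
        import numpy as np

        def local_linear(x, y, x0, h):
            """Weighted LS of y on (1, x - x0); the intercept estimates m(x0)."""
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
            X = np.column_stack([np.ones_like(x), x - x0])
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return beta[0]

        x = np.sort(np.random.rand(200) * 10)
        y = np.sin(x) + 0.2 * np.random.randn(200)
        grid = np.linspace(0.5, 9.5, 50)
        m_hat = np.array([local_linear(x, y, g, h=0.4) for g in grid])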

  18. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm, based on imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to make a balance between exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including genetic algorithm (GA), HS, and ICA on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and can be used in several real-life engineering and management problems.

  1. Assessment of traffic noise levels in urban areas using different soft computing techniques.

    PubMed

    Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D

    2016-10-01

    Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed, and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve the development process, as well as the accuracy, of traffic noise prediction.

  2. Two-dimensional surface strain measurement based on a variation of Yamaguchi's laser-speckle strain gauge

    NASA Technical Reports Server (NTRS)

    Barranger, John P.

    1990-01-01

    A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes are determined by a variation of Yamaguchi's laser-speckle strain gauge technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.

  3. Performance Evaluation of EnKF-based Hydrogeological Site Characterization using Color Coherent Vectors

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.

    2017-12-01

    Complexity of hydrogeological systems arises from multi-scale heterogeneity and insufficient measurements of their underlying parameters, such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, a widely used data assimilation technique that reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including rank histograms, root-mean-square error (RMSE), and ensemble spread. However, these commonly used methods disregard the spatial information and variability of geological formations, so hydraulic conductivity fields with very different spatial structures can have similar histograms or RMSE. We propose a vision-based approach that quantifies the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our approach adapts a metric from image analysis, Color Coherent Vectors (CCV), to evaluate the accuracy of the fields estimated by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. Under various factors of data assimilation methods, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms the other existing evaluation metrics.
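
    The following sketch shows the coherence-vector idea on single-channel 2D fields: quantize values into bins, split each bin's pixels into coherent and incoherent counts by connected-region size, and compare the resulting vectors. The bin count, coherence threshold, and single-channel simplification are assumptions for illustration; the paper works with three-channel images.

    ```python
    import numpy as np
    from scipy import ndimage

    def coherence_vector(field, n_bins=8, tau=50):
        """Color-coherence-style descriptor for a 2D parameter field.

        Each pixel is quantized into n_bins value bins; within a bin, pixels in
        connected regions larger than tau are 'coherent', the rest 'incoherent'.
        Returns an (n_bins, 2) array of (coherent, incoherent) pixel counts.
        """
        edges = np.quantile(field, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.digitize(field, edges)
        ccv = np.zeros((n_bins, 2), dtype=int)
        for b in range(n_bins):
            labels, n = ndimage.label(bins == b)
            sizes = ndimage.sum(bins == b, labels, index=np.arange(1, n + 1))
            ccv[b, 0] = int(sizes[sizes >= tau].sum())   # coherent pixels
            ccv[b, 1] = int(sizes[sizes < tau].sum())    # incoherent pixels
        return ccv

    def ccv_distance(f1, f2):
        """L1 distance between coherence vectors of two fields."""
        return np.abs(coherence_vector(f1) - coherence_vector(f2)).sum()

    # Two fields with identical histograms but different spatial structure
    # yield a large CCV distance even though histogram/RMSE-style summaries agree.
    rng = np.random.default_rng(2)
    smooth = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 8)
    shuffled = rng.permutation(smooth.ravel()).reshape(128, 128)
    print(ccv_distance(smooth, shuffled))
    ```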

  4. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which provides the high spatial resolution of the beamformer and handles multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
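
    A compact sketch of the source-space ICA pipeline (beamform first, then SVD + ICA) might look as follows; the plain LCMV weights stand in for the paper's weight-normalized variant, the lead-field layout is assumed, and FastICA is one possible ICA implementation.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def source_space_ica(data, leadfields, n_components=5):
        """Sketch of source-space ICA: beamform, then SVD + ICA.

        data       : (n_sensors, n_samples) MEG recordings
        leadfields : (n_voxels, n_sensors) lead field for each source location
        Returns the ICA component time courses and their spatial maps.
        """
        C = np.cov(data)                                   # sensor covariance
        Cinv = np.linalg.pinv(C)
        # LCMV beamformer weights per voxel: w = C^-1 l / (l^T C^-1 l)
        W = np.array([Cinv @ l / (l @ Cinv @ l) for l in leadfields])
        src = W @ data                                     # source-space signals
        # Dimensionality reduction by SVD, then ICA on the reduced signals
        U, s, Vt = np.linalg.svd(src, full_matrices=False)
        reduced = Vt[:n_components] * s[:n_components, None]
        ica = FastICA(n_components=n_components, random_state=0)
        time_courses = ica.fit_transform(reduced.T).T      # (n_components, n_samples)
        maps = U[:, :n_components] @ ica.mixing_           # spatial map per voxel
        return time_courses, maps
    ```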

  5. Detection of circuit-board components with an adaptive multiclass correlation filter

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains the information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation-based techniques in terms of performance criteria for recognition and classification of objects.

  6. Performance evaluation of Olympic weightlifters.

    PubMed

    Garhammer, J

    1979-01-01

    The comparison of weights lifted by athletes in different bodyweight categories is a continuing problem for the sport of Olympic weightlifting. An objective mechanical evaluation procedure was developed using basic ideas from a model proposed by Ranta in 1975. This procedure was based on more realistic assumptions than the original model and considered both vertical and horizontal bar movements. Utilization of data obtained from film of national-caliber lifters indicated that the proposed method was workable, and that the evaluative indices ranked lifters in reasonable order relative to other comparative techniques.

  7. Vibrations Detection in Industrial Pumps Based on Spectral Analysis to Increase Their Efficiency

    NASA Astrophysics Data System (ADS)

    Rachid, Belhadef; Hafaifa, Ahmed; Boumehraz, Mohamed

    2016-03-01

    Spectral analysis is the key tool for the study of vibration signals in rotating machinery. In this work, vibration analysis for condition-based preventive maintenance of such machines is proposed, addressing problems related to vibration detection in the components of these machines. The vibration signal of a centrifugal pump was processed to demonstrate the benefits of the proposed approach. The results present the estimation of the pump vibration signal using the Fourier transform technique, compared with spectral analysis methods based on the Prony approach.

  8. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, since the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
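
    The selection-style fusion reduces to a small decision rule once a sensor performance model is available; the sketch below is a minimal version with a hypothetical error model.

    ```python
    import numpy as np

    def fuse_by_selection(ranges, quality):
        """Select, per measurement epoch, the sensor deemed most accurate.

        ranges  : (n_sensors,) range estimates for this epoch (NaN if missing)
        quality : callable mapping a range estimate to an expected error,
                  taken from the off-line sensor performance model
        """
        errs = np.array([quality(r) if np.isfinite(r) else np.inf for r in ranges])
        best = int(np.argmin(errs))
        return best, ranges[best]

    # Hypothetical performance model: error grows with range and blows up
    # outside the sensor's reliable operating window.
    def model_error(r):
        return 0.02 * r if 0.5 < r < 8.0 else 1e6

    sensor_readings = np.array([3.1, np.nan, 3.4, 9.2])
    idx, fused = fuse_by_selection(sensor_readings, model_error)
    print(f"sensor {idx} selected, fused range = {fused:.2f} m")
    ```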

  9. Implementation speed of deterministic population passages compared to that of Rabi pulses

    NASA Astrophysics Data System (ADS)

    Chen, Jingwei; Wei, L. F.

    2015-02-01

    The fast Rabi π-pulse technique has been widely applied to various coherent quantum manipulations, although it requires precise design of the pulse areas. Relaxing the need for precise pulse design, various rapid adiabatic passage (RAP) approaches have been utilized instead to implement deterministic population passages. However, the usual RAP protocol cannot be implemented as fast as desired, as the relevant adiabatic condition must be robustly satisfied during the passage. Here, we propose a modified shortcut-to-adiabaticity (STA) technique to significantly accelerate the desired deterministic population passages. This transitionless technique goes beyond the usual rotating wave approximation (RWA) made in recent STA protocols, and thus can be applied to deliver fast quantum evolutions in which the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated specifically with driven two- and three-level systems. Numerical results show that with the present STA technique beyond the RWA, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used Rabi π pulses, while remaining insensitive to the applied pulse areas.

  10. Localization of a continuous CO2 leak from an isotropic flat-surface structure using acoustic emission detection and near-field beamforming techniques

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Cui, Xiwang; Guo, Miao; Han, Xiaojuan

    2016-11-01

    Seal integrity is of great importance for the safe operation of pressurized vessels. It is crucial to locate a leak hole promptly and accurately for reasons of safety and maintenance. This paper presents the principle and application of a linear acoustic emission sensor array and a near-field beamforming technique to identify the location of a continuous CO2 leak from an isotropic flat-surface structure on a pressurized vessel in a carbon capture and storage system. Acoustic signals generated by the leak hole are collected using a linear high-frequency sensor array. Time-frequency analysis and a narrow-band filtering technique are deployed to extract effective information about the leak. The impacts of various factors on the performance of the localization technique are simulated, compared and discussed, including the number of sensors, the distance between the leak hole and the sensor array, and the spacing between adjacent sensors. Experiments were carried out on a laboratory-scale test rig to assess the effectiveness and operability of the proposed method. The results obtained suggest that the proposed method is capable of providing accurate and reliable localization of a continuous CO2 leak.
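
    A bare-bones near-field delay-and-sum beamformer over a grid of candidate leak locations could look like the sketch below; uniform wave speed and integer-sample delays are simplifying assumptions.

    ```python
    import numpy as np

    def near_field_das(signals, sensor_xy, grid_xy, fs, c):
        """Near-field delay-and-sum beamforming on a surface grid.

        signals   : (n_sensors, n_samples) band-passed AE signals
        sensor_xy : (n_sensors, 2) sensor positions on the surface [m]
        grid_xy   : (n_points, 2) candidate leak locations [m]
        fs        : sampling rate [Hz];  c : wave speed in the plate [m/s]
        Returns the index of the grid point with maximum steered power.
        """
        n_sensors, n_samples = signals.shape
        power = np.zeros(len(grid_xy))
        for g, p in enumerate(grid_xy):
            # Spherical (near-field) delays, one per sensor, relative to minimum
            d = np.linalg.norm(sensor_xy - p, axis=1)
            lags = np.round((d - d.min()) / c * fs).astype(int)
            aligned = np.zeros(n_samples)
            for s in range(n_sensors):
                aligned[:n_samples - lags[s]] += signals[s, lags[s]:]
            power[g] = np.mean(aligned**2)
        return int(np.argmax(power))
    ```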

  11. Highly Sensitive and Wide-Dynamic-Range Multichannel Optical-Fiber pH Sensor Based on PWM Technique.

    PubMed

    Khan, Md Rajibur Rahaman; Kang, Shin-Won

    2016-11-09

    In this study, we propose a highly sensitive multichannel pH sensor that is based on an optical-fiber pulse width modulation (PWM) technique. According to the optical-fiber PWM method, the received sensing signal's pulse width changes when the optical-fiber pH sensing-element of the array comes into contact with pH buffer solutions. The proposed optical-fiber PWM pH-sensing system offers a linear sensing response over a wide range of pH values from 2 to 12, with a high pH-sensing ability. The sensitivity of the proposed pH sensor is 0.46 µs/pH, and the correlation coefficient R² is approximately 0.997. Additional advantages of the proposed optical-fiber PWM pH sensor include a short/fast response-time of about 8 s, good reproducibility properties with a relative standard deviation (RSD) of about 0.019, easy fabrication, low cost, small size, reusability of the optical-fiber sensing-element, and the capability of remote sensing. Finally, the performance of the proposed PWM pH sensor was compared with that of potentiometric, optical-fiber modal interferometer, and optical-fiber Fabry-Perot interferometer pH sensors with respect to dynamic range width, linearity as well as response and recovery times. We observed that the proposed sensing systems have better sensing abilities than the above-mentioned pH sensors.

  13. Results of a joint NOAA/NASA sounder simulation study

    NASA Technical Reports Server (NTRS)

    Phillips, N.; Susskind, Joel; Mcmillin, L.

    1988-01-01

    This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of the AMTS compared to the HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.

  14. Technical Note: Analysis of non-regulated vehicular emissions by extractive FTIR spectrometry: tests on a hybrid car in Mexico City

    NASA Astrophysics Data System (ADS)

    Reyes, F.; Grutter, M.; Jazcilevich, A.; González-Oropeza, R.

    2006-11-01

    A methodology to acquire valuable information on the chemical composition and evolution of vehicular emissions is presented. The analysis of the gases is performed by passing a constant flow of sample gas from the tail-pipe into a 10 L multi-pass cell. The absorption spectra within the cell are obtained using an FTIR spectrometer at 0.5 cm⁻¹ resolution along a 13.1 m optical path. Additionally, the total flow from the exhaust is continuously measured with a differential pressure sensor on a Pitot tube installed at the exit of the exhaust. This configuration aims to obtain good speciation capability by coadding spectra during 30 s and reporting the emission (in g/km) of both criteria pollutants and non-regulated pollutants, such as CO2, CO, NO, SO2, NH3, HCHO and some NMHC, during predetermined driving cycles. The advantages and disadvantages of increasing the measurement frequency, as well as the effect of other parameters such as spectral resolution, cell volume and flow rate, are discussed. To test and evaluate the proposed technique, experiments were performed on a dynamometer running FTP-75 and typical driving cycles for the Mexico City Metropolitan Area (MCMA) on a Toyota Prius hybrid vehicle. This car is an example of recently marketed automotive technology dedicated to reduced emissions, increasing the need for sensitive detection techniques. This study shows the potential of the proposed technique to measure and report in real time the emissions of a large variety of pollutants, even from a super ultra-low emission vehicle (SULEV). The emissions of HCs, NOx, CO and CO2 obtained here were compared to experiments performed in other locations with the same model vehicle. The proposed technique provides a tool for future studies comparing in detail the emissions of vehicles using alternative fuels and emission control systems.

  15. Online approximation of the multichannel Wiener filter with preservation of interaural level difference for binaural hearing-aids.

    PubMed

    Marques do Carmo, Diego; Costa, Márcio Holsbach

    2018-04-01

    This work presents an online approximation method for the multichannel Wiener filter (MWF) noise reduction technique with preservation of the noise interaural level difference (ILD) for binaural hearing aids. The steepest descent method is applied to a previously proposed MWF-ILD cost function to both approximate the optimal linear estimator of the desired speech and keep the subjective perception of the original acoustic scenario. The computational cost of the resulting algorithm is estimated in terms of multiply-and-accumulate operations, whose number can be controlled by setting the number of iterations at each time frame. Simulation results for the particular case of one speech source and one directional noise source show that the proposed method increases the signal-to-noise ratio (SNR) of the originally acquired speech by up to 16.9 dB in the assessed scenarios. Compared to the online implementation of the conventional MWF technique, the proposed technique provides a reduction of up to 7 dB in the noise ILD error at the price of a reduction of up to 3 dB in the output SNR. Subjective experiments with volunteers complement these objective measures with psychoacoustic results, which corroborate the expected spatial preservation of the original acoustic scenario. The proposed method allows practical online implementation of the MWF-ILD noise reduction technique under constrained computational resources. Predicted SNR improvements from 12 dB to 16.9 dB can be obtained in application-specific integrated circuits for hearing aids and state-of-the-art digital signal processors.
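
    The core of the online approximation is a steepest-descent update on the MWF-ILD cost; the sketch below shows one such update per frequency bin, with the ILD-penalty gradient treated as given, since its exact form depends on the cost function of the earlier paper.

    ```python
    import numpy as np

    def mwf_ild_step(w, x, d, noise_ild_grad, mu=1e-3, lam=0.5):
        """One steepest-descent update for an MWF-style filter with an ILD penalty.

        w             : current filter coefficients (complex, per frequency bin)
        x             : (n_mics,) noisy input vector for this frame/bin
        d             : desired (speech reference) sample for this frame/bin
        noise_ild_grad: gradient of the ILD-preservation term at w, supplied by
                        the chosen MWF-ILD cost function (assumed given here)
        """
        e = d - np.vdot(w, x)            # estimation error, e = d - w^H x
        grad_mwf = -np.conj(e) * x       # gradient of |e|^2 with respect to w*
        return w - mu * (grad_mwf + lam * noise_ild_grad)
    ```

    Running a small, fixed number of such iterations per time frame is what gives the method its controllable multiply-and-accumulate budget.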

  16. Automated collimation testing by determining the statistical correlation coefficient of Talbot self-images.

    PubMed

    Rana, Santosh; Dhanotia, Jitendra; Bhatia, Vimal; Prakash, Shashi

    2018-04-01

    In this paper, we propose a simple, fast, and accurate technique for detecting the collimation position of an optical beam using the self-imaging phenomenon and correlation analysis. Herrera-Fernandez et al. [J. Opt. 18, 075608 (2016)] proposed an experimental arrangement for collimation testing by comparing the periods of two different self-images produced by a single diffraction grating. Following their approach, we propose a testing procedure based on the correlation coefficient (CC) for efficient detection of variation in the size and fringe width of the Talbot self-images, and thereby the collimation position. When the beam is collimated, the physical properties of the self-images of the grating, such as their size and fringe width, do not vary from one Talbot plane to the other and are identical; the CC is maximum in this situation. For a de-collimated position, the size and fringe width of the self-images vary, and correspondingly the CC decreases. Hence, the magnitude of the CC is a measure of the degree of collimation. Using the method, we could set the collimation position to a resolution of 1 μm, which corresponds to ±0.25 μrad in collimation angle (for testing a collimating lens of diameter 46 mm and focal length 300 mm). In contrast to most collimation techniques reported to date, the proposed technique does not require translation/rotation of the grating, complicated phase evaluation algorithms, or an intricate method for determining the period of the grating or its self-images. The technique is fully automated and provides high resolution and precision.
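
    The CC computation itself is elementary; a sketch for two registered self-images follows.

    ```python
    import numpy as np

    def self_image_cc(img1, img2):
        """Pearson correlation coefficient between two Talbot self-images.

        Near collimation the self-images at successive Talbot planes have the
        same period and size, so the CC peaks; any de-collimation scales one
        image relative to the other and the CC drops.
        """
        a = (img1 - img1.mean()) / img1.std()
        b = (img2 - img2.mean()) / img2.std()
        return float(np.mean(a * b))

    # Scanning the collimating lens position and picking the maximum CC then
    # gives the collimation setting.
    ```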

  17. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold-learning-based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions of the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold-learning-based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness of the alignment step of the STA. The advantage of the proposed technique is that the difficulty of selecting the size of the local neighborhood in manifold-learning-based tensor feature extraction algorithms can be avoided. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on well-known image databases, as well as action and hand-gesture databases, with object images encoded as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with tensor-based unsupervised learning methods.

  18. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2017-04-01

    In this paper, an optimization technique called the peer enhanced teaching-learning based optimization (PeTLBO) algorithm is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-less, which reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparing with the robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.
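
    The fuzzy best-compromise step mentioned above is commonly implemented with linear memberships over the Pareto front, as in this sketch (the objective values are made up for illustration).

    ```python
    import numpy as np

    def best_compromise(pareto):
        """Fuzzy best-compromise selection from a Pareto front (minimization).

        pareto : (n_solutions, n_objectives) array of objective values.
        Each objective gets a linear membership (1 at its minimum, 0 at its
        maximum); the solution with the largest normalized total membership
        is returned as the best compromise.
        """
        fmin, fmax = pareto.min(axis=0), pareto.max(axis=0)
        mu = (fmax - pareto) / np.where(fmax > fmin, fmax - fmin, 1.0)
        score = mu.sum(axis=1) / mu.sum()
        return int(np.argmax(score))

    # Example: trade-offs between real power loss [kW] and voltage deviation.
    front = np.array([[102.3, 0.041], [110.8, 0.029], [121.5, 0.024]])
    print("best compromise index:", best_compromise(front))
    ```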

  19. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.

  20. Description and interpretation of the bracts epidermis of Gramineae (Poaceae) with rotated image with maximum average power spectrum (RIMAPS) technique.

    PubMed

    Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M; Setten, Lorena M

    2008-10-01

    During the last few years, the RIMAPS technique has been used to characterize the micro-relief of metallic surfaces and has recently also been applied to biological surfaces. RIMAPS is an image analysis technique which rotates an image and calculates its average power spectrum. Here, it is presented as a tool for describing the morphology of the trichodium net found in some grasses, which develops on the epidermal cells of the lemma. Three different species of grasses (herbarium samples) are analyzed: Podagrostis aequivalvis (Trin.) Scribn. & Merr., Bromidium hygrometricum (Nees) Nees & Meyen and Bromidium ramboi (Parodi) Rúgolo. Simple schemes representing the real microstructure of the lemma are proposed and studied. RIMAPS spectra of both the schemes and the real microstructures are compared. These results indicate how similar the proposed geometrical schemes are to the real microstructures. Each geometrical pattern could be used as a reference for classifying other species. Finally, this kind of analysis is used to determine the morphology of the trichodium net of Agrostis breviculmis Hitchc. As the dried sample had shrunk and the microstructure was not clear, two kinds of morphology are proposed for the trichodium net of Agrostis L., one elliptical and the other rectilinear, the former being the more suitable.

  1. Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery

    PubMed Central

    Sivaraks, Haemwaan

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because they cannot differentiate ECG artifacts from the real ECG signal, especially artifacts that are similar to ECG signals in shape and/or frequency. The problem leads to high vigilance demands on physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design, and every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. The proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods.

  2. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Given the requirement of prompt and accurate diagnosis of malaria, the current study proposes unsupervised pixel segmentation based on clustering algorithms to obtain fully segmented red blood cells (RBCs) infected with malaria parasites, based on thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms have been proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image as well as to remove unwanted regions such as small background pixels. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear in the image and that, due to their size, cannot be cleaned by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the cascaded algorithm with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.
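
    For reference, a minimal FCM implementation on image intensities is sketched below; in the paper's cascade, the moving k-means stage would supply the initial centers rather than the random draw used here.

    ```python
    import numpy as np

    def fuzzy_c_means(x, n_clusters=3, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means on a 1D intensity array (flattened image)."""
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, n_clusters, replace=False).astype(float)
        for _ in range(iters):
            # Membership update: u_ij proportional to 1/d_ij^(2/(m-1)), rows normalized
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)
            # Center update: weighted mean with weights u^m
            um = u ** m
            centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        return u.argmax(axis=1), centers
    ```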

  3. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    PubMed

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included an adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they have been built by combining different low dynamic range (LDR) images. This method is applied to ensure correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques, and we compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only from the highlight regions but also from their surrounding areas, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
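
    The DoLP map used for classification follows directly from the Stokes components; a sketch assuming intensity images captured behind a linear polarizer at four orientations:

    ```python
    import numpy as np

    def degree_of_linear_polarization(i0, i45, i90, i135):
        """DoLP map from images behind a polarizer at 0/45/90/135 degrees.

        Stokes components: S0 = (I0+I45+I90+I135)/2, S1 = I0-I90, S2 = I45-I135.
        """
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

    # Metals tend to keep highlights strongly polarized while dielectrics
    # depolarize more, so thresholding DoLP around highlight regions helps
    # separate the two classes.
    ```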

  4. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    NASA Astrophysics Data System (ADS)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

    Conventionally, cardiac MR image analysis is done manually. Automatic analysis can replace this monotonous task on massive amounts of data when assessing the global and regional function of the cardiac left ventricle (LV). The task is performed on MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon a genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We use the benchmark Sunnybrook dataset to evaluate the proposed work. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The overlap between the automatic segmentation and the expert ground truth is high, with an index value of 91.30%. The proposed method for automatic segmentation thus performs better than existing techniques in terms of accuracy.
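
    The localization step reduces to template matching under the SAD criterion; a brute-force sketch (the template and search strategy are illustrative, not the paper's exact procedure):

    ```python
    import numpy as np

    def sad_localize(image, template):
        """Locate a template (e.g., an LV patch) by sum of absolute differences.

        Returns the (row, col) of the window with minimal SAD; a brute-force
        scan is enough for the small search ranges in short-axis slices.
        """
        ih, iw = image.shape
        th, tw = template.shape
        best, best_rc = np.inf, (0, 0)
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
                if sad < best:
                    best, best_rc = sad, (r, c)
        return best_rc
    ```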

  5. Nonlinear adaptive control of grid-connected three-phase inverters for renewable energy applications

    NASA Astrophysics Data System (ADS)

    Mahdian-Dehkordi, N.; Namvar, M.; Karimi, H.; Piya, P.; Karimi-Ghartemani, M.

    2017-01-01

    Distributed generation (DG) units are often interfaced to the main grid using power electronic converters, including voltage-source converters (VSCs). A VSC offers dc/ac power conversion, high controllability, and fast dynamic response. Because of the nonlinearities, uncertainties, and parameter changes inherent in a grid-connected renewable DG system, conventional linear control methods cannot completely and efficiently address all control objectives. In this paper, a nonlinear adaptive control scheme based on the adaptive backstepping strategy is presented to control the operation of a grid-connected renewable DG unit. Compared to the popular vector control technique, the proposed controller offers smoother transient responses and a lower level of current distortion. The Lyapunov approach is used to establish global asymptotic stability of the proposed control system, and a linearisation technique is employed to develop guidelines for tuning the controller parameters. Extensive time-domain digital simulations are performed and presented to verify the performance of the proposed controller when employed in a VSC to control the operation of a two-stage DG unit and also that of a single-stage solar photovoltaic system. Desirable and superior performance of the proposed controller is observed.

  6. Physical-level synthesis for digital lab-on-a-chip considering variation, contamination, and defect.

    PubMed

    Liao, Chen; Hu, Shiyan

    2014-03-01

    Microfluidic lab-on-a-chips have been widely utilized in biochemical analysis and human health studies due to their high detection accuracy, high timing efficiency, and low cost. The increasing design complexity of lab-on-a-chips necessitates a computer-aided design (CAD) methodology in contrast to the classical manual design methodology. A key part of lab-on-a-chip CAD is physical-level synthesis. It includes lab-on-a-chip placement and routing, where placement determines the physical location and starting time of each operation and routing transports each droplet from its source to its destination. In lab-on-a-chip design, variation, contamination, and defects need to be considered. This work designs a physical-level synthesis flow which simultaneously considers variation, contamination, and defects. It proposes a maze-routing-based, variation-, contamination-, and defect-aware droplet routing technique, which is seamlessly integrated into an existing placement technique. The proposed technique improves the placement solution for routing and achieves placement and routing co-optimization to handle variation, contamination, and defects. The simulation results demonstrate that our technique does not use any defective/contaminated grids, while a technique that ignores contamination and defects uses 17.0% of the defective/contaminated grids on average. In addition, our routing-variation-aware technique significantly improves the average routing yield by 51.2% with only a 3.5% increase in completion time compared to a routing-variation-unaware technique.
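
    At its core, maze routing is a shortest-path search that simply never expands defective or contaminated cells; a minimal BFS sketch of this idea (grid encoding and droplet timing constraints are simplified away):

    ```python
    from collections import deque

    def maze_route(grid_ok, src, dst):
        """BFS maze routing on a lab-on-a-chip grid.

        grid_ok : 2D list of booleans, False for defective/contaminated cells
                  (and cells blocked by other droplets at this time step)
        src/dst : (row, col) source and destination of the droplet
        Returns the shortest defect-free path as a list of cells, or None.
        """
        rows, cols = len(grid_ok), len(grid_ok[0])
        prev = {src: None}
        queue = deque([src])
        while queue:
            cell = queue.popleft()
            if cell == dst:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid_ok[nr][nc] and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None  # no defect-free route exists
    ```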

  7. Implementation of a finite element analysis procedure for structural analysis of shape memory behaviour of fibre reinforced shape memory polymer composites

    NASA Astrophysics Data System (ADS)

    Azzawi, Wessam Al; Epaarachchi, J. A.; Islam, Mainul; Leng, Jinsong

    2017-12-01

    Shape memory polymers (SMPs) offer a unique ability to undergo substantial shape deformation and subsequently recover the original shape when exposed to a particular external stimulus. Comparatively low mechanical properties are the major drawback limiting extended use of SMPs in engineering applications; however, the inclusion of reinforcing fibres in SMPs improves mechanical properties significantly while retaining the intrinsic shape memory effects. Implementing shape memory polymer composites (SMPCs) in any engineering application is a demanding task which requires profound materials and design optimization, yet currently available analytical tools have critical limitations for accurate analysis/simulation of SMPC structures, which slows the translation of breakthrough research outcomes into real-life applications. Many finite element (FE) models have been presented, but the majority of them require complicated user subroutines to integrate with standard FE software packages. Furthermore, those subroutines are problem-specific and difficult to use for a wider range of SMPC materials and related structures. This paper presents an FE simulation technique to model the thermomechanical behaviour of SMPCs using the commercial FE software ABAQUS. The proposed technique incorporates the material's time-dependent viscoelastic behaviour. The ability of the proposed technique to predict shape fixity and shape recovery was evaluated against experimental data acquired from the bending of an SMPC cantilever beam. The excellent correlation between the experimental and FE simulation results confirms the robustness of the proposed technique.

  8. Towards a Quality Assessment Method for Learning Preference Profiles in Negotiation

    NASA Astrophysics Data System (ADS)

    Hindriks, Koen V.; Tykhonov, Dmytro

    In automated negotiation, information gained about an opponent's preference profile by means of learning techniques may significantly improve an agent's negotiation performance. It is therefore useful to gain a better understanding of how various negotiation factors influence the quality of learning. The quality of learning techniques in negotiation is typically assessed indirectly, by comparing the utility levels of agreed outcomes and other, more global negotiation parameters. An evaluation of learning based on such general criteria, however, does not provide any insight into the influence of various aspects of negotiation on the quality of the learned model itself. The quality may depend on such aspects as the domain of negotiation, the structure of the preference profiles, the negotiation strategies used by the parties, and others. To gain a better understanding of the performance of proposed learning techniques in the context of negotiation, and to be able to assess the potential to improve their performance, a more systematic assessment method is needed. In this paper we propose such a systematic method to analyse the quality of the information gained about opponent preferences by learning in single-instance negotiations. The method includes measures to assess the quality of a learned preference profile and proposes an experimental setup to analyse the influence of various negotiation aspects on the quality of learning. We apply the method to a Bayesian learning approach for learning an opponent's preference profile and discuss our findings.

  9. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging

    NASA Astrophysics Data System (ADS)

    Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md

    2011-10-01

    The use of x-ray flat panel detectors is becoming increasingly popular in 3D cone beam volume CT machines. Due to deficiencies in the semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in the cone beam reconstruction image and thereby degrade diagnostic image quality. In this paper, a novel technique is presented for the correction of errors in 2D cone beam projections caused by abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties, and an effective non-causal derivative-based detection algorithm in 2D space is presented for the separate detection of defective and mis-calibrated detector elements. An image-inpainting-based 3D correction scheme is proposed for estimating the responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using a normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images, and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other techniques reported in the literature.

  10. A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas

    2010-12-01

    In breast elastography, breast tissue usually undergoes large compression, resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications, such as measuring normal tissue hyperelastic parameters in vivo; such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion and can be viewed as an inverse problem. To solve this problem, we used a nonlinear finite element model corresponding to its forward problem. In this research, we applied the Veronda-Westmann, Yeoh and polynomial models to model tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while we constructed the tissue-mimicking phantom from polyvinyl alcohol with freeze-thaw cycles so that it exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissue, mimicking adipose tissue, fibroglandular tissue and a tumor. The results of the simulations and experiments show the feasibility of accurately reconstructing tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately: compared to uniaxial test results, the average errors of the reconstructed parameter ratios of the inclusion to the middle and external layers were 13% and 9.6%, respectively. Given that the parameter ratios of abnormal tissues to normal ones range from three times to more than ten times, this accuracy is sufficient for tumor classification.

  11. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class-sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach which initially extracts the image regions and their descriptors, which are then associated with the primitives present in the images. This step requires a set of annotated training regions to define the primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class, so the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of the primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality-sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike standard hashing methods, allow one to represent each image by a set of primitive-class-sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are most similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to standard hashing methods.

  12. [Application of THz technology to nondestructive detection of agricultural product quality].

    PubMed

    Jiang, Yu-ying; Ge, Hong-yi; Lian, Fei-yu; Zhang, Yuan; Xia, Shan-hong

    2014-08-01

    With the recent development of THz sources and detectors, applications of THz radiation to nondestructive testing and quality control have expanded into many fields, such as agriculture, safety inspection, medicine, biochemistry, and communication. Compared with other detection techniques, THz radiation is a relatively new modality with low photon energy, good penetration, and high signal-to-noise ratio, and it can thus provide physical, chemical and biological information. This paper first introduces the basic concept of THz radiation and its major properties, then gives an extensive review of recent research progress in detecting the quality of agricultural products via THz techniques, analyzes the existing shortcomings of THz detection and discusses the outlook for potential applications, and finally proposes a new application of THz techniques to the quality assessment of stored grain.

  13. Integrality and separability of multitouch interaction techniques in 3D manipulation tasks.

    PubMed

    Martinet, Anthony; Casiez, Géry; Grisoni, Laurent

    2012-03-01

    Multitouch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multitouch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multitouch displays. Using that taxonomy, we introduce Depth-Separated Screen-Space (DS3), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space. Results show that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 performing faster than the two other techniques.

  14. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    PubMed

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by adding some low-complexity steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
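
    For orientation, the underlying GHA update (Sanger's rule) in its linear form is sketched below; the kernelized, dictionary-based version of the brief and its extra orthonormalization step are noted in comments rather than implemented.

    ```python
    import numpy as np

    def gha_step(W, x, eta=1e-2):
        """One generalized Hebbian algorithm (Sanger's rule) update.

        W : (n_components, n_features) current component estimates (float array)
        x : (n_features,) input sample; in KPCA, x would be replaced by the
            kernel feature representation over a learned dictionary
        """
        y = W @ x
        # Sanger's rule: dW = eta * (y x^T - lower_triangular(y y^T) W)
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        # The brief's contribution is to follow such steps with a low-cost
        # orthonormalization of the rows of W (e.g., Gram-Schmidt), which
        # speeds convergence without materially raising the per-update cost.
        return W
    ```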

  15. Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing

    2015-01-01

    In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri–Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method. PMID:26569241

  17. Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.

    2013-09-01

    In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which are widely applicable in the fuel ignition problem of combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x), with α, β ∈ (-1, ∞), x ∈ [0, 1], and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparing the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
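
    A worked statement may make the setup concrete. Below is a common Bratu-type initial value problem together with illustrative collocation conditions for a degree-N shifted Jacobi expansion; the sign convention on the nonlinear term and the exact treatment of the initial conditions vary between formulations, so this is a sketch rather than the paper's precise system.

    ```latex
    % Bratu-type IVP and shifted Jacobi-Gauss collocation (illustrative form)
    \begin{align*}
      & u''(x) + \lambda\, e^{u(x)} = 0, \qquad 0 < x \le 1,
        \qquad u(0) = u'(0) = 0, \\
      & u_N(x) = \sum_{n=0}^{N} a_n\, J_n^{(\alpha,\beta)}(x), \qquad
        u_N''(x_k) + \lambda\, e^{u_N(x_k)} = 0, \quad k = 1, \dots, N-1,
    \end{align*}
    ```

    with the collocation nodes x_k taken as shifted Jacobi-Gauss points; the two initial conditions close the system for the N + 1 unknown coefficients a_n.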

  18. A Modified Artificial Bee Colony Algorithm for p-Center Problems

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The objective of the p-center problem is to locate p centers on a network such that the maximum of the distances from each node to its nearest center is minimized. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic that mimics the foraging behavior of honey bee colonies. This study proposes a modified ABC algorithm that benefits from a variety of search strategies to balance exploration and exploitation. Moreover, random-key-based coding schemes are used to solve the p-center problem effectively. The proposed algorithm is compared to state-of-the-art techniques on different benchmark problems, and the computational results reveal that the proposed approach is very efficient.
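
    The random-key encoding makes every candidate solution feasible by construction: sort the keys, open the p lowest-keyed nodes, and evaluate the minimax objective. A sketch (the problem instance and key perturbation policy are illustrative):

    ```python
    import numpy as np

    def decode_random_key(key, dist, p):
        """Decode a random-key vector into a p-center solution and its cost.

        key  : (n_nodes,) continuous values in [0, 1] (one per candidate node)
        dist : (n_nodes, n_nodes) shortest-path distance matrix
        p    : number of centers to open
        The p nodes with the smallest keys are opened; the objective is the
        maximum over nodes of the distance to the nearest open center.
        """
        centers = np.argsort(key)[:p]
        return centers, dist[:, centers].min(axis=1).max()

    # Each "food source" in the ABC algorithm is such a key vector; employed,
    # onlooker, and scout bee phases perturb keys in [0, 1] while this decoder
    # keeps every visited solution feasible.
    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 10, (20, 2))
    D = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    print(decode_random_key(rng.random(20), D, p=3))
    ```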

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Michael; Battista, Jerry; Chen, Jeff

    Purpose: To develop a radiotherapy dose tracking and plan evaluation technique using cone-beam computed tomography (CBCT) images. Methods: We developed a patient-specific method of calibrating CBCT image sets for dose calculation. The planning CT was first registered with the CBCT using deformable image registration (DIR). A scatter plot was generated between the CT numbers of the planning CT and CBCT for each slice. The CBCT calibration curve was obtained by least-squares fitting of the data, and applied to each CBCT slice. The calibrated CBCT was then merged with the original planning CT to extend the small field of view of the CBCT. Finally, the treatment plan was copied to the merged CT for dose tracking and plan evaluation. The proposed patient-specific calibration method was also compared to two methods proposed in the literature. To evaluate the accuracy of each technique, 15 head-and-neck patients requiring plan adaptation were arbitrarily selected from our institution. The original plan was calculated on each method's data set, including a second planning CT acquired within 48 hours of the CBCT (serving as gold standard). Clinically relevant dose metrics and 3D gamma analysis of dose distributions were compared between the different techniques. Results: Compared to the gold standard of using planning CTs, the patient-specific CBCT calibration method was shown to provide promising results, with gamma pass rates above 95% and average dose metric agreement within 2.5%. Conclusions: The patient-specific CBCT calibration method could potentially be used for on-line dose tracking and plan evaluation, without requiring a re-planning CT session.
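
    The slice-wise calibration amounts to a linear least-squares fit between paired voxel values; a minimal sketch under the assumption that registration has already been applied:

    ```python
    import numpy as np

    def calibrate_cbct_slice(ct_slice, cbct_slice):
        """Patient-specific CBCT calibration for one slice.

        Both inputs are deformably registered 2D arrays of the same shape.
        A straight line mapping CBCT values to planning-CT numbers is fitted
        by least squares over the paired voxels, then applied to the slice.
        """
        x = cbct_slice.ravel().astype(float)
        y = ct_slice.ravel().astype(float)
        slope, intercept = np.polyfit(x, y, 1)   # least-squares linear fit
        return slope * cbct_slice + intercept

    # Applying this per slice, then merging the calibrated CBCT with the
    # planning CT to cover the CBCT's small field of view, yields the data
    # set on which the original plan is recomputed for dose tracking.
    ```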

  20. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-01-01

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345

  1. Reference Groups for Female Attractiveness Among Black and White College Females.

    ERIC Educational Resources Information Center

    Harrison, Algea O.; Stoner, David M.

    This study is concerned with the comparative reference group used by black and white women when evaluating female attractiveness. The study's examination of this issue is part of a larger report on the relationship between self-concept, attractiveness and reference group for black and white women. The study used the technique proposed by McGuire…

  2. Teaching Authorial Style and Literary Technique: "Exemplo XI" of "El Conde Lucanor"

    ERIC Educational Resources Information Center

    Bryant, Stacy

    2016-01-01

    This current study proposes a comparative method of teaching authorial style, using four versions of "Exemplo XI," an often-anthologized tale about the "mago" of Toledo, Don Illán, from the "Conde Lucanor," a series of interlinked tales by the early fourteenth-century author Don Juan Manuel. Teaching a medieval text…

  3. Comparison of estimated and measured sediment yield in the Gualala River

    Treesearch

    Matthew O’Connor; Jack Lewis; Robert Pennington

    2012-01-01

    This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by the need to assess potential water quality impacts of a proposed vineyard development project in the Gualala River watershed. Previous erosion rate estimates were developed using sediment source assessment techniques by the North Coast Regional...

  4. Visual Hybrid Development Learning System (VHDLS) Framework for Children with Autism

    ERIC Educational Resources Information Center

    Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina

    2015-01-01

    Education serves as a relative remedy for the deficits of children with autism. As a result, these children require special techniques to capture their attention and interest in learning, compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid…

  5. The constrained discrete-time state-dependent Riccati equation technique for uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Chang, Insu

    The objective of this thesis is to introduce a relatively general nonlinear controller/estimator synthesis framework using a special type of state-dependent Riccati equation technique. The continuous-time state-dependent Riccati equation (SDRE) technique is extended to discrete time under input and state constraints, yielding the constrained (C) discrete-time (D) SDRE, referred to as CD-SDRE. For the latter, stability analysis and calculation of a region of attraction are carried out. The derivation of the D-SDRE under state-dependent weights is provided. Stability of the D-SDRE feedback system is established using a Lyapunov stability approach. A receding-horizon strategy is used to take the constraints on the D-SDRE controller into account. The stability condition of the CD-SDRE controller is analyzed by means of a switched system. The use of the CD-SDRE scheme in the presence of constraints is then systematically demonstrated by applying it to problems of spacecraft formation orbit reconfiguration under limited thruster performance. Simulation results demonstrate the efficacy and reliability of the proposed CD-SDRE. The CD-SDRE technique is further investigated for cases with uncertainties in the nonlinear systems to be controlled. First, system stability under each of the controllers in the robust CD-SDRE technique is separately established. The stability of the closed-loop system under the robust CD-SDRE controller is then proven based on the stability of each control system comprising the switching configuration. A high-fidelity dynamical model of spacecraft attitude motion in three-dimensional space is derived with a partially filled fuel tank, assumed to exhibit the first fuel-slosh mode. The proposed robust CD-SDRE controller is then applied to the spacecraft attitude control system to stabilize its motion in the presence of uncertainties characterized by the first fuel-slosh mode. The performance of the robust CD-SDRE technique is discussed. Subsequently, filtering techniques are investigated using the D-SDRE technique. A detailed derivation of the D-SDRE-based filter (D-SDREF) is provided under the assumption of Gaussian noise, and the error between the measured and estimated signals is proven to be input-to-state stable. For non-Gaussian noise, we propose a filter combining the D-SDREF and the particle filter (PF), named the combined D-SDRE/PF. Algorithms for both filtering techniques are provided. Several filtering techniques are compared on challenging numerical examples to show the reliability and efficacy of the proposed D-SDREF and the combined D-SDRE/PF.
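
    To make the basic (unconstrained) D-SDRE step concrete: at each sample the state-dependent coefficients are frozen at the current state, a discrete algebraic Riccati equation is solved, and the resulting gain is applied. The sketch below uses SciPy and a made-up state-dependent factorization purely for illustration; the input/state constraints and receding-horizon logic of the CD-SDRE are omitted.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def d_sdre_control(x, A_fun, B_fun, Q, R):
    """One unconstrained D-SDRE step: freeze the state-dependent
    coefficients at the current state, solve the discrete algebraic
    Riccati equation, and apply the resulting feedback gain."""
    A, B = A_fun(x), B_fun(x)
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return -K @ x

# Made-up state-dependent factorization x[k+1] = A(x) x + B(x) u,
# for illustration only (not a model from the thesis).
A_fun = lambda x: np.array([[1.0, 0.1], [0.1 * np.cos(x[0]), 1.0]])
B_fun = lambda x: np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

x = np.array([0.5, 0.0])
for _ in range(50):                     # closed-loop simulation
    u = d_sdre_control(x, A_fun, B_fun, Q, R)
    x = A_fun(x) @ x + B_fun(x) @ u
print(x)                                # state driven towards the origin
```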

  6. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions.

    PubMed

    Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust with respect to fast-changing environmental conditions, efficiency, accuracy at steady state, and the dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test with more than 95 percent acceptability.
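
    A minimal sketch of the regression core of such a tracker, with synthetic samples standing in for the 300,000 logged data points and a toy irradiance/temperature law that is purely an assumption, could use scikit-learn's random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic samples standing in for the logged data: the model maps
# (irradiance in W/m^2, module temperature in C) to the MPP voltage.
rng = np.random.default_rng(0)
G = rng.uniform(100, 1000, 5000)
T = rng.uniform(10, 60, 5000)
v_mpp = 30.0 + 0.002 * G - 0.12 * (T - 25) + rng.normal(0, 0.2, 5000)  # toy law

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(np.column_stack([G, T]), v_mpp)

# At run time, the two high-speed sensor readings give the reference
# operating point that the DC-DC converter controller would then track.
print(rf.predict([[800.0, 35.0]]))
```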

  7. Chinese social media analysis for disease surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Yang, Nanhai; Wang, Zhibo

    Here, it is reported that seasonal flu causes hundreds of thousands of deaths around the world every year (an estimated 250,000 to 500,000). Other diseases such as chickenpox and malaria are also serious threats to people's physical and mental health. Proper techniques for disease surveillance are therefore in high demand. Recently, social media analysis has been regarded as an efficient way to achieve this goal, which is feasible since a growing number of people post their health information on social media such as blogs and personal websites. Previous work on social media analysis mainly focused on English materials and hardly considered Chinese materials, which hinders the application of such techniques to Chinese people. In this paper, we propose a new method of Chinese social media analysis for disease surveillance. More specifically, we compared different kinds of classification methods and propose a new way to process Chinese text data. Chinese Sina micro-blog data collected from September to December 2013 are used to validate the effectiveness of the proposed method. The results show that a high average classification precision of 87.49% was obtained. Comparing with data from the authority, the Chinese National Influenza Center, we can predict the outbreak time of flu 5 days earlier.
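
    One plausible shape for such a tokenize-then-classify pipeline, assuming the jieba segmenter and a TF-IDF/logistic-regression classifier rather than the authors' actual components, is sketched below:

```python
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: label 1 = flu-related micro-blog post.
posts = ["我感冒发烧了", "今天天气真好", "喉咙痛 咳嗽 可能是流感", "周末去爬山"]
labels = [1, 0, 1, 0]

tokenize = lambda text: jieba.lcut(text)   # word segmentation for Chinese
clf = make_pipeline(TfidfVectorizer(tokenizer=tokenize, token_pattern=None),
                    LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["我咳嗽还发烧"]))        # expected: flu-related (1)
```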

  8. A novel Bayesian respiratory motion model to estimate and resolve uncertainty in image-guided cardiac interventions.

    PubMed

    Peressutti, Devis; Penney, Graeme P; Housden, R James; Kolbitsch, Christoph; Gomez, Alberto; Rijkhorst, Erik-Jan; Barratt, Dean C; Rhode, Kawal S; King, Andrew P

    2013-05-01

    In image-guided cardiac interventions, respiratory motion causes misalignments between the pre-procedure roadmap of the heart used for guidance and the intra-procedure position of the heart, reducing the accuracy of the guidance information and leading to potentially dangerous consequences. We propose a novel technique for motion-correcting the pre-procedural information that combines a probabilistic MRI-derived affine motion model with intra-procedure real-time 3D echocardiography (echo) images in a Bayesian framework. The probabilistic model incorporates a measure of confidence in its motion estimates which enables resolution of the potentially conflicting information supplied by the model and the echo data. Unlike models proposed so far, our method allows the final motion estimate to deviate from the model-produced estimate according to the information provided by the echo images, so adapting to the complex variability of respiratory motion. The proposed method is evaluated using gold-standard MRI-derived motion fields and simulated 3D echo data for nine volunteers and real 3D live echo images for four volunteers. The Bayesian method is compared to 5 other motion estimation techniques and results show mean/max improvements in estimation accuracy of 10.6%/18.9% for simulated echo images and 20.8%/41.5% for real 3D live echo data, over the best comparative estimation method. Copyright © 2013 Elsevier B.V. All rights reserved.
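
    The core Bayesian idea, letting the final estimate deviate from the model prediction in proportion to the model's stated confidence, reduces in a one-dimensional Gaussian illustration (a simplification of the paper's affine motion model) to precision-weighted averaging:

```python
def fuse_gaussian(mu_model, var_model, mu_echo, var_echo):
    """Precision-weighted (Bayesian) fusion of the model prediction and
    the echo-derived measurement; low model confidence (large var_model)
    lets the estimate deviate towards the echo data."""
    w = (1.0 / var_model) / (1.0 / var_model + 1.0 / var_echo)
    mu = w * mu_model + (1 - w) * mu_echo
    var = 1.0 / (1.0 / var_model + 1.0 / var_echo)
    return mu, var

# 1-D illustration: superior-inferior heart translation (mm) at one instant.
print(fuse_gaussian(mu_model=8.0, var_model=4.0, mu_echo=10.0, var_echo=1.0))
```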

  9. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    PubMed

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
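
    The abstract does not spell out the bivariate similarity index, so the following is only one plausible formulation, under the assumption that the two components separate under- and over-estimation: the fraction of the reference covered by the segmentation, and the fraction of the segmentation lying inside the reference.

```python
import numpy as np

def bivariate_similarity(ref_mask, seg_mask):
    """One plausible bivariate index (an assumption, not the paper's exact
    definition): coverage of the reference (1 = no underestimation) and
    purity of the segmentation (1 = no overestimation)."""
    inter = np.logical_and(ref_mask, seg_mask).sum()
    return inter / ref_mask.sum(), inter / seg_mask.sum()

ref = np.zeros((64, 64), bool); ref[16:48, 16:48] = True
seg = np.zeros((64, 64), bool); seg[20:52, 16:48] = True   # shifted estimate
print(bivariate_similarity(ref, seg))
```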

  10. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    PubMed Central

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors, or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques, fitness sharing and elitism. Two NSCT-based methods are proposed for registration. A comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its property of speeding up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, with the second technique yielding the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, the registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work well for multi-temporal satellite images as well, even in the presence of noise. PMID:22163672
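
    A bare-bones genetic search over the rigid parameters (tx, ty, angle), using elitist selection with Gaussian mutation and plain normalized cross-correlation in place of the paper's NSCT multiresolution framework and fitness sharing, might look like the following sketch:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def fitness(params, ref, mov):
    tx, ty, ang = params
    return ncc(ref, shift(rotate(mov, ang, reshape=False), (tx, ty)))

def ga_register(ref, mov, pop=20, gens=25, sigma=(1.5, 1.5, 2.0)):
    """Elitist GA: keep the best half, mutate it to refill the population."""
    rng = np.random.default_rng(0)
    P = rng.uniform([-10, -10, -15], [10, 10, 15], (pop, 3))
    for _ in range(gens):
        f = np.array([fitness(p, ref, mov) for p in P])
        elite = P[np.argsort(f)[-pop // 2:]]
        P = np.vstack([elite, elite + rng.normal(0, sigma, elite.shape)])
    f = np.array([fitness(p, ref, mov) for p in P])
    return P[f.argmax()]                # best (tx, ty, angle) found

ref = np.zeros((64, 64)); ref[24:40, 20:44] = 1.0
mov = shift(rotate(ref, -6, reshape=False), (3.0, -2.0))  # known misalignment
print(ga_register(ref, mov))            # roughly undoes the misalignment
```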

  11. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.

    PubMed

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-06-30

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.
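
    The weighted-sum variant is straightforward to illustrate: each candidate route is scored by a weighted sum of travel time (edge length over average speed) and length, and the annealing rule occasionally accepts a worse route so the search can escape local optima. The toy network and weights below are assumptions; the TOPSIS variant would instead rank routes by relative closeness to an ideal solution.

```python
import math, random
import networkx as nx

# Toy road network: each edge carries a length (km) and an average travel
# speed (km/h) as would be reported by city sensors.
G = nx.Graph()
for u, v, length, speed in [("A", "B", 2, 50), ("B", "D", 2, 20),
                            ("A", "C", 3, 60), ("C", "D", 1, 40),
                            ("B", "C", 1, 30), ("C", "E", 4, 55),
                            ("D", "E", 2, 25)]:
    G.add_edge(u, v, length=length, time=length / speed)

def cost(path, w_time=0.7, w_len=0.3):
    """Weighted-sum cost over the edges of a route."""
    t = sum(G[u][v]["time"] for u, v in zip(path, path[1:]))
    d = sum(G[u][v]["length"] for u, v in zip(path, path[1:]))
    return w_time * t + w_len * d

paths = list(nx.all_simple_paths(G, "A", "E"))   # candidate routes
cur, T = random.choice(paths), 1.0
while T > 1e-3:
    nxt = random.choice(paths)                   # neighbouring candidate
    if cost(nxt) < cost(cur) or random.random() < math.exp((cost(cur) - cost(nxt)) / T):
        cur = nxt                                # annealing acceptance rule
    T *= 0.95                                    # geometric cooling
print(cur, round(cost(cur), 3))
```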

  12. A novel technique for fetal heart rate estimation from Doppler ultrasound signal

    PubMed Central

    2011-01-01

    Background: The fetal monitoring instrumentation currently in use, based on the Doppler ultrasound technique, provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel, efficient technique for processing the ultrasound signal that could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods: We propose a new technique that provides true beat-to-beat values of the FHR signal through multiple measurements of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and the calculated indices describing FHR variability were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results: The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables reliable calculation of parameters describing FHR variability. Relating these results to the other FHR estimation method, we showed that our approach rejected a much lower number of measured cardiac cycles as invalid. Conclusions: The proposed method for fetal heart rate determination on a beat-to-beat basis offers high accuracy of the heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
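
    A single-window simplification of the autocorrelation step (the paper adjusts the window dynamically and detects peaks adaptively, whereas this sketch uses one fixed window and a plain argmax over a physiologically plausible lag range) can be written as:

```python
import numpy as np

def beat_interval(envelope, fs, min_bpm=60, max_bpm=240):
    """Estimate one cardiac-cycle duration from a window of the Doppler
    envelope via its autocorrelation peak, searched over a plausible
    range of heart-beat lags."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)
    lag = lo + np.argmax(ac[lo:hi])
    return lag / fs                                     # seconds per beat

# Synthetic envelope with a 132 bpm periodicity plus noise.
fs = 1000
t = np.arange(0, 4, 1 / fs)
env = np.abs(np.sin(np.pi * 2.2 * t)) + 0.1 * np.random.randn(t.size)
print(60.0 / beat_interval(env, fs))                    # approximately 132
```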

  13. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrence and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving the resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, caches, and main memory. In addition, we discuss techniques proposed for non-volatile memory, GPUs, and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify the vulnerability of processor structures. Finally, we believe that this survey will help researchers, system architects, and processor designers gain insights into techniques for improving the reliability of computing systems.

  14. Comparison of Two Variants of a Kata Technique (Unsu): The Neuromechanical Point of View

    PubMed Central

    Camomilla, Valentina; Sbriccoli, Paola; Mario, Alberto Di; Arpante, Alessandro; Felici, Francesco

    2009-01-01

    The objective of this work was to characterize, from a neuromechanical point of view, a jump performed within the sequence of Kata Unsu by international top-level karateka. A modified jumping technique was proposed to improve the already acquired technique. The neuromechanical evaluation, paralleled by a refereeing judgment, was then used to compare the modified and classic techniques, to test whether the modification could lead to a better performance capacity, e.g. a higher score during an official competition. To this purpose, four high-ranked karateka were recruited and instructed to perform the two jumps. Surface electromyographic signals were recorded in a bipolar mode from the vastus lateralis, rectus femoris, biceps femoris, gluteus maximus, and gastrocnemius muscles of both lower limbs. Mechanical data were collected by means of a stereophotogrammetric system and force platforms. Performance was associated with parameters characterizing the initial conditions of the aerial phase and with the maximal height of the CoM. The most critical elements having a negative influence on the arbitral evaluation were associated with quantitative error indicators. 3D reconstruction of the movement and videos were used to obtain the referee scores. The Unsu jump was divided into five phases (preparation, take-off, ascending flight, descending flight, and landing) and the critical elements were highlighted. When comparing the techniques, no difference was found in the pattern of sEMG activation of the throwing-leg muscles, while the push leg showed an earlier activation of the RF and GA muscles at the beginning of the modified technique. The only significant improvement associated with the modified technique was evidenced at the beginning of the aerial phase, with no significant improvement of the referee score. Nevertheless, the proposed neuromechanical analysis, aimed at correlating technique features with the core performance indicators, is new in the field and is a promising tool for further analyses. Key points: (1) a quantitative phase analysis, highlighting the critical features of the technique, was provided for the jump executed during the Kata Unsu; (2) kinematics and neuromuscular activity can be assessed during the Kata Unsu jump performed by top-level karateka; (3) neuromechanical parameters change during different Kata Unsu jump techniques; (4) appropriate performance capacity indicators based on the neuromechanical evaluation can describe changes due to a modification of the technique. PMID:24474884

  15. Diffraction based overlay and image based overlay on production flow for advanced technology node

    NASA Astrophysics Data System (ADS)

    Blancquaert, Yoann; Dezauzier, Christophe

    2013-04-01

    One of the main challenges of the lithography step is overlay control. For advanced technology nodes like 28nm and 14nm, the overlay budget becomes very tight. Two overlay techniques compete in our advanced semiconductor manufacturing: Diffraction Based Overlay (DBO) with the YieldStar S200 (ASML) and Image Based Overlay (IBO) with the ARCHER (KLA). In this paper we compare these two methods on three critical production layers: poly gate, contact, and first metal. We show the overlay results of the two techniques, explore their accuracy, and compare the total measurement uncertainty (TMU) for the standard overlay targets of both techniques. We also examine the response and impact of the IBO and DBO techniques under a process change, such as an additional hardmask TEOS layer on the front-end stack. The importance of target design is addressed, and we propose a more suitable design for image-based targets. Finally, we present embedded targets in 14nm FDSOI with first results.

  16. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then added to the quadratic model step by step until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose proper candidate voxels for each round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unexpected increase in the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on traditional dose sorting. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving fluence map optimization with dose-volume constraints.
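
    A toy version of the constraint-adding loop, with a plain worst-violator rule standing in for the geometric-distance sorting, SciPy's SLSQP standing in for the interior-point solver, and made-up matrices in place of a real dose calculation, is sketched below:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.random((40, 10))            # OAR voxel doses = A @ x, x = beamlet weights
A_t = rng.random((8, 10)) + 0.5     # target voxel dose matrix
d_t, limit, max_frac = 60.0, 20.0, 0.3   # DV constraint: <=30% of OAR above 20

def solve(capped):
    """Quadratic target-dose objective with hard caps on 'capped' OAR voxels."""
    cons = [{"type": "ineq", "fun": lambda x, i=i: limit - A[i] @ x}
            for i in capped]
    res = minimize(lambda x: np.sum((A_t @ x - d_t) ** 2), np.ones(10),
                   bounds=[(0, None)] * 10, constraints=cons, method="SLSQP")
    return res.x

capped, x = [], solve([])
while (A @ x > limit).mean() > max_frac:       # DV constraint still violated
    doses = A @ x
    viol = np.where(doses > limit)[0]
    capped.append(int(viol[np.argmax(doses[viol])]))   # cap worst violator
    x = solve(capped)
print(len(capped), (A @ x > limit).mean())
```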
