Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because the records usually contain sensitive information. A common practice in privacy-preserving data publishing is to anonymize the data before publishing so that they satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it. However, existing generalization-based anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization method for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all types of quality metrics, the proposed method shows lower information loss than the existing method. In real-world EHR analyses, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
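As a rough illustration of the full-domain generalization step described above, the following sketch raises a single quasi-identifier (age) through a small generalization hierarchy until k-anonymity holds. The hierarchy, the column choice, and the omission of the counterfeit-record and catalog steps are simplifying assumptions, not the paper's exact algorithm.

```python
# Sketch of full-domain generalization to k-anonymity (illustrative only).

def generalize_age(age, level):
    """Level 0 keeps the exact age, level 1 maps to a 10-year band,
    level 2 fully suppresses the value."""
    if level == 0:
        return str(age)
    if level == 1:
        lo = (age // 10) * 10
        return "{}-{}".format(lo, lo + 9)
    return "*"

def is_k_anonymous(values, k):
    """True if every generalized value occurs at least k times."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return all(c >= k for c in counts.values())

def anonymize(ages, k):
    """Full-domain generalization: raise the level uniformly for the whole
    column until k-anonymity holds. (The paper additionally inserts
    counterfeit records and publishes a catalog of them.)"""
    for level in range(3):
        table = [generalize_age(a, level) for a in ages]
        if is_k_anonymous(table, k):
            return level, table
    return 2, ["*"] * len(ages)
```

For example, `anonymize([23, 27, 25, 41, 44, 45], k=3)` settles at level 1, producing two 10-year bands of three records each.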
Si, Guangsen; Xu, Zeshui
2018-01-01
The hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to its linguistic terms cannot accurately reflect decision-makers' subjective cognition. In general, different decision-makers have different sensitivities towards the semantics, and such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function that transforms the semantics corresponding to linguistic terms into linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which decision-makers can flexibly express their distinct semantics and obtain decision results consistent with their cognition. For calculations and comparisons over hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by comparisons with two other representative methods: the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method. PMID:29614019
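The prospect-theory-based linguistic scale function described above can be sketched as follows. The value-function parameters are the commonly cited Tversky-Kahneman estimates (alpha = beta = 0.88, lambda = 2.25); the exact parameterization and normalization used in the paper are assumptions here.

```python
# Sketch of a CPT-style linguistic scale function (parameters are the
# usual Tversky-Kahneman estimates, an assumption, not the paper's values).

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function: concave for gains, convex and
    steeper (loss aversion, lam > 1) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def linguistic_scale(term_index, tau=3):
    """Map a linguistic term s_i with i in [-tau, tau] to a preference
    value in [-1, 1] by normalizing the CPT value of i / tau."""
    v = cpt_value(term_index / tau)
    scale = max(abs(cpt_value(1.0)), abs(cpt_value(-1.0)))
    return v / scale
```

Loss aversion makes negative terms spread out more than positive ones: `linguistic_scale(-3)` is -1.0, while `linguistic_scale(3)` is only about 0.44.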
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
Description of the Need for the Information and its Proposed Use: Owners of project-based Section 8 contracts that utilize the AAF as the method of rent adjustment provide this information, which is necessary to...
Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls
Chae, Jeongsook; Jin, Yong; Sung, Yunsick
2018-01-01
Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot control, limited data should be analyzed and utilized efficiently. For example, the motions made with a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on the orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable among the estimated motions is treated as the final estimated motion, so recognition accuracy can be improved compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When only orientation data are used, as in the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%, reducing motion estimation errors by about 12 percentage points. PMID:29324641
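A minimal sketch of weighted Bayesian fusion of the two data types might look like the following; the motion labels, probabilities, and equal weights are illustrative, and the paper's exact weighting scheme may differ.

```python
# Sketch: weighted log-linear fusion of two per-sensor posteriors over the
# same candidate motions (labels and numbers are invented for illustration).

def fuse(p_orientation, p_emg, w_ori=0.5, w_emg=0.5):
    """Combine two posteriors over the same motions with weighted
    exponents, then renormalize to a proper distribution."""
    combined = {m: (p_orientation[m] ** w_ori) * (p_emg[m] ** w_emg)
                for m in p_orientation}
    z = sum(combined.values())
    return {m: p / z for m, p in combined.items()}

p_ori = {"wave": 0.6, "fist": 0.3, "point": 0.1}   # from orientation data
p_emg = {"wave": 0.2, "fist": 0.7, "point": 0.1}   # from EMG data
fused = fuse(p_ori, p_emg)
best = max(fused, key=fused.get)   # most probable motion wins
```

Here the EMG evidence for "fist" outweighs the orientation evidence for "wave", so the fused estimate selects "fist".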
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been widely applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting feature representations. Compared with RLR, our proposed SSSLR has two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
Real-Time GNSS-Based Attitude Determination in the Measurement Domain.
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-02-05
A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, and the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of attitude resolution is also increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method obtains an optimal balance between accuracy and reliability.
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues alone, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Poobalasubramanian, Mangalraj; Agrawal, Anupam
2016-10-01
The presented work proposes fusion of panchromatic and multispectral images in the shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system efficient in terms of spatial enhancement. A luminance-hue-saturation-based color conversion system is utilized to avoid spectral distortions. The proposed fusion method is tested on Worldview2 and Ikonos datasets and compared against other methodologies; it performs well against the compared methods in terms of both subjective and objective evaluations.
Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.
IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
Brain medical image diagnosis based on corners with importance-values.
Gao, Linlin; Pan, Haiwei; Li, Qing; Xie, Xiaoqin; Zhang, Zhiqiang; Han, Jinming; Zhai, Xiao
2017-11-21
Brain disorders are one of the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching studies essential. However, existing corner detection studies do not consider the domain information of the brain, which leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of the brain are not employed in existing methods. Moreover, most corner matching studies are used for 3D image registration; they are inapplicable to 2D brain image diagnosis because of the different mechanisms. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs), which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis. Brain CT and MRI image sets are utilized to evaluate the proposed method. First, classifiers are trained in N-fold cross-validation analysis to produce the best θ and K. Then independent brain image sets are tested to evaluate the classifiers. The classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% on accuracy and 2.4% on F1-score. For the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% on accuracy and 4.9% on F1-score. Results also demonstrate that the proposed method is robust to different intensity ranges of brain medical images. In this study, we develop a robust corner-based brain medical image classifier.
Specifically, we propose a corner detection method utilizing the diagnostic information from neurologists and a corner matching method based on the uncertainty and structure of brain medical images. Additionally, we present a similarity calculation method for brain image classification. Experimental results on two brain image sets show the proposed corner-based brain medical image classifier outperforms the state-of-the-art studies.
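The bipartite corner-matching idea can be illustrated with a brute-force minimum-cost assignment on spatial distance alone; the uncertainty and structural terms of the paper's matcher are omitted, and the corner coordinates below are invented.

```python
# Sketch: match corners of image A to corners of image B as a bipartite
# assignment minimizing total squared distance (brute force, small sets).
from itertools import permutations

def match_corners(corners_a, corners_b):
    """Return the tuple perm where corners_a[i] is paired with
    corners_b[perm[i]], chosen to minimize the summed squared distance."""
    def cost(perm):
        return sum((ax - corners_b[j][0]) ** 2 + (ay - corners_b[j][1]) ** 2
                   for (ax, ay), j in zip(corners_a, perm))
    return min(permutations(range(len(corners_b))), key=cost)

a = [(0, 0), (10, 0), (0, 10)]
b = [(9, 1), (1, 9), (1, 1)]   # shuffled, slightly perturbed copies of a
print(match_corners(a, b))      # → (2, 0, 1)
```

A production matcher would use the Hungarian algorithm instead of enumerating permutations, and would fold image-structure terms into the cost.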
Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun
2016-01-01
The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
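For contrast with the sparse-representation model described above, a baseline RSS fingerprint matcher is simply nearest neighbor in signal space; the positions and RSS vectors below are invented for illustration.

```python
# Sketch: nearest-neighbor RSS fingerprint localization (the baseline that
# sparse-representation methods with spatio-temporal constraints improve on).

def locate(measurement, fingerprints):
    """Return the stored position whose RSS vector is closest (Euclidean)
    to the measured RSS vector."""
    def dist(fp):
        return sum((m - f) ** 2 for m, f in zip(measurement, fp))
    return min(fingerprints, key=lambda pos: dist(fingerprints[pos]))

# Fingerprint database: position -> RSS (dBm) from three access points.
fps = {(0, 0): [-40, -70, -80],
       (5, 0): [-60, -50, -75],
       (0, 5): [-70, -72, -45]}
pos = locate([-58, -52, -74], fps)   # closest to the (5, 0) fingerprint
```

The measured vector [-58, -52, -74] is nearest the (5, 0) fingerprint, so that position is returned.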
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) imaging is independent of atmospheric conditions, making SAR an ideal image source for change detection. Existing methods directly analyze all the regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
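The log-ratio and clustering steps can be sketched on a toy one-dimensional "image"; the saliency analysis and PCA feature extraction of the full framework are omitted here, and the pixel values are invented.

```python
# Sketch: log-ratio difference image followed by a two-class 1-D k-means.
import math

def log_ratio(img1, img2, eps=1e-6):
    """Per-pixel log-ratio difference image |log(I2 / I1)|; eps avoids
    division by zero on dark pixels."""
    return [abs(math.log((b + eps) / (a + eps))) for a, b in zip(img1, img2)]

def kmeans_1d(values, iters=20):
    """Two-class k-means on scalars; label 1 marks the 'changed' class
    (the cluster initialized at the maximum value)."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]
        g0 = [v for v, l in zip(values, labels) if l == 0]
        g1 = [v for v, l in zip(values, labels) if l == 1]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return labels

before = [10, 11, 10, 12, 10, 11]
after = [10, 12, 10, 80, 90, 11]   # pixels 3 and 4 changed
labels = kmeans_1d(log_ratio(before, after))   # → [0, 0, 0, 1, 1, 0]
```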
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
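The likelihood ratio test machinery described above reduces to a simple computation once the log-likelihoods of the restricted (standard Gumbel) model and the semi-nonparametric model that nests it are in hand. The numbers below are illustrative, and the chi-square critical values are hard-coded for the 5% level at small degrees of freedom.

```python
# Sketch: likelihood ratio test of a restricted model against a nesting model.

def likelihood_ratio_test(loglik_restricted, loglik_full, df):
    """LR statistic 2*(LL_full - LL_restricted), compared against the 5%
    chi-square critical value for df extra parameters (hard-coded for
    df = 1..3). Returns (statistic, reject_restricted_model)."""
    crit = {1: 3.841, 2: 5.991, 3: 7.815}[df]
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, stat > crit

# Illustrative log-likelihoods: the nesting model adds 2 parameters.
stat, reject = likelihood_ratio_test(-1050.0, -1041.5, df=2)
# stat = 17.0 > 5.991, so the Gumbel restriction would be rejected here.
```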
NASA Astrophysics Data System (ADS)
Nikolaeva, L. A.; Khusaenova, A. Z.
2014-05-01
A method for utilizing production wastes is considered, and a process circuit arrangement is proposed for utilizing a mixture of activated silt and sludge from chemical water treatment by incinerating it with possible heat recovery. The sorption capacity of the products from combusting a mixture of activated silt and sludge with respect to gaseous emissions is experimentally determined. A periodic-duty adsorber charged with a fixed bed of sludge is calculated, and the heat-recovery boiler efficiency is estimated together with the technical-economic indicators of the proposed utilization process circuit arrangement.
A transaction assessment method for allocation of transmission services
NASA Astrophysics Data System (ADS)
Banunarayanan, Venkatasubramaniam
The purpose of this research is to develop transaction assessment methods for allocating the transmission services that an area/utility provides to power transactions. Transmission services are the services needed to deliver, or provide the capacity to deliver, real and reactive power from one or more supply points to one or more delivery points. As the number of transactions increases rapidly in the emerging deregulated environment, accurate quantification of the transmission services an area/utility provides to accommodate a transaction is becoming important, because appropriate pricing schemes can then be developed to compensate the parties that provide these services. The allocation methods developed are based on the "Fair Resource Allocation Principle", and they determine for each transaction the following: the flow path of the transaction (both real and reactive power components), the generator reactive power support from each area/utility, and the real power loss support from each area/utility. Further, allocation methods for distributing the cost of relieving transmission line congestion caused by transactions are also developed. The main feature of the proposed methods is representation of the actual usage of transmission services by the transactions. The proposed methods are tested extensively on a variety of systems. The allocation methods developed in this thesis are not only useful in studying the impact of transactions on a transmission system in a multi-transaction case, but are indeed necessary to meet the criteria set forth by FERC with regard to pricing based on actual usage. The "consistency" of the proposed allocation methods has also been investigated and tested.
Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P
2017-04-01
Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement over the 55.5 min reconstruction time of the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
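The ℓ1-regularized least-squares recovery can be illustrated with a plain (non-parallel) iterative soft-thresholding (ISTA) loop on a tiny dense system; the paper's implementation is a fast variant running on multichannel k-space data, so treat this purely as a sketch of the soft-thresholding iteration.

```python
# Sketch: ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 on a tiny system.

def soft_threshold(v, t):
    """Proximal operator of the l1 norm, applied elementwise."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(A, y, lam, step, iters=500):
    """Gradient step on the data term, then soft-thresholding.
    step must be below 2 / (largest eigenvalue of A^T A) to converge."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
             for i in range(m)]                              # residual Ax - y
        g = [sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]                              # gradient A^T r
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 0.0, 2.0]            # consistent with x = [2, 0]
x = ista(A, y, lam=0.1, step=0.3, iters=300)
```

With lam = 0.1 the recovered solution is the slightly shrunk [1.95, 0], which satisfies the ℓ1 optimality (subgradient) conditions for this system.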
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.
Evaluating the utility of two gestural discomfort evaluation methods
Son, Minseok; Jung, Jaemoon; Park, Woojin
2017-01-01
Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016
Key frame extraction based on spatiotemporal motion trajectory
NASA Astrophysics Data System (ADS)
Zhang, Yunzuo; Tao, Ran; Zhang, Feng
2015-05-01
Spatiotemporal motion trajectories accurately reflect changes of motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Unlike well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all moving objects on the spatiotemporal slice. Experimental results show that the proposed method performs comparably to state-of-the-art methods based on motion energy or acceleration on single-object video, and performs better on multi-object video.
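Inflexion-based key frame selection can be sketched as a sign-change test on the discrete second difference of a one-dimensional trajectory; extracting the trajectory from the spatiotemporal slice is assumed already done, and the trajectory values are invented.

```python
# Sketch: key frames at inflexions of a 1-D motion trajectory, detected as
# sign changes of the discrete second difference.

def inflexion_frames(traj):
    """Return frame indices where the second difference of traj changes
    sign (the later frame of each sign-change pair is reported)."""
    d2 = [traj[i + 1] - 2 * traj[i] + traj[i - 1]
          for i in range(1, len(traj) - 1)]   # d2[j] belongs to frame j + 1
    keys = []
    for i in range(1, len(d2)):
        if d2[i - 1] * d2[i] < 0:
            keys.append(i + 1)                # convert back to frame index
    return keys

# Accelerate, cruise, decelerate: inflexions near frames 3 and 8.
traj = [0, 1, 4, 9, 12, 13, 12, 9, 4, 1, 0]
print(inflexion_frames(traj))   # → [3, 8]
```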
A self-reference PRF-shift MR thermometry method utilizing the phase gradient
NASA Astrophysics Data System (ADS)
Langley, Jason; Potter, William; Phipps, Corey; Huang, Feng; Zhao, Qun
2011-12-01
In magnetic resonance (MR) imaging, the most widely used and accurate method for measuring temperature is based on the shift in proton resonance frequency (PRF). However, inter-scan motion and bulk magnetic field shifts can lead to inaccurate temperature measurements with the PRF-shift MR thermometry method. Self-reference PRF-shift MR thermometry was introduced to overcome such problems by deriving a reference image from the heated or treated image itself, approximating the reference phase map with low-order polynomial functions. In this note, a new approach is presented to calculate the baseline phase map in self-reference PRF-shift MR thermometry. The proposed method utilizes the phase gradient, removing the phase unwrapping step inherent to other self-reference PRF-shift MR thermometry methods. The performance of the proposed method was evaluated using numerical simulations with temperature distributions following a two-dimensional Gaussian function, as well as phantom and in vivo experimental data sets. The results from both the numerical simulations and the experimental data show that the proposed method is a promising technique for measuring temperature.
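The underlying PRF-shift relation, ΔT = Δφ / (2π · γ · α · B0 · TE), converts a phase difference into a temperature change. The sketch below uses the standard PRF coefficient α ≈ -0.01 ppm/°C and illustrative B0 and TE values; it does not reproduce the note's phase-gradient technique for avoiding phase unwrapping.

```python
# Sketch: standard PRF-shift temperature calculation from a phase difference.
import math

def prf_temperature_change(dphi, b0=3.0, te=0.01, alpha_ppm=-0.01,
                           gamma_hz_per_t=42.576e6):
    """Temperature change (deg C) from phase difference dphi (rad) at field
    strength b0 (T) and echo time te (s), using
    dT = dphi / (2*pi*gamma*alpha*B0*TE) with alpha in ppm/deg C."""
    alpha = alpha_ppm * 1e-6
    return dphi / (2 * math.pi * gamma_hz_per_t * alpha * b0 * te)

# Negative phase shift at 3 T, TE = 10 ms corresponds to heating of ~6.2 C.
dT = prf_temperature_change(-0.5)
```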
NASA Astrophysics Data System (ADS)
Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao
2017-01-01
With the expanding share of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, the multiple uncertainties of FDRs may affect the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, accounting for the response capabilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is utilized as an illustration of the proposed technique.
NASA Astrophysics Data System (ADS)
Jang, G. H.; Yeom, J. H.; Kim, M. G.
2007-03-01
This paper presents a method to determine the torque constant and the torque-speed-current characteristics of a brushless DC (BLDC) motor by utilizing the back-EMF variation of the nonenergized phase. It also develops a BLDC motor controller with a digital signal processor (DSP) to monitor current, voltage and speed in real time. The torque-speed-current characteristics of a BLDC motor are determined using the proposed method and the developed controller, and are compared with the characteristics measured experimentally with a dynamometer. This research shows that the proposed method effectively determines the torque constant and the torque-speed-current characteristics of the BLDC motor without using a dynamometer.
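The estimation step described above can be sketched as follows. This is a minimal illustration, not the paper's procedure: it assumes SI units, where the torque constant (N·m/A) numerically equals the back-EMF constant (V·s/rad), and fits the back-EMF constant as a least-squares slope through the origin:

```python
import numpy as np

def torque_constant(speeds_rad_s, back_emfs_v):
    """Estimate the back-EMF constant k_e (V*s/rad) from back-EMF
    samples taken on the nonenergized phase at several speeds,
    via a least-squares slope through the origin. In SI units the
    torque constant k_t numerically equals k_e."""
    w = np.asarray(speeds_rad_s, float)
    e = np.asarray(back_emfs_v, float)
    return float(w @ e / (w @ w))

def torque(k_t, current_a):
    """Electromagnetic torque (N*m) from the torque constant."""
    return k_t * current_a
```

With noiseless samples lying on a line of slope 0.05 V·s/rad, the fit recovers k_t = 0.05 and predicts 0.1 N·m at 2 A.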
40 CFR 35.937-4 - Solicitation and evaluation of proposals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... relative importance attached to each criterion (a numerical weighted formula need not be utilized). (c) All... subpart. The grantee shall also evaluate the candidate's proposed method to accomplish the work required...
Multiple Power-Saving MSSs Scheduling Methods for IEEE802.16e Broadband Wireless Networks
2014-01-01
This work proposes two enhanced multiple mobile subscriber stations (MSSs) power-saving scheduling methods for IEEE802.16e broadband wireless networks. The proposed methods are designed for the Unsolicited Grant Service (UGS) of IEEE802.16e. To reduce the active periods of all power-saving MSSs, the base station (BS) allocates each MSS the fewest possible transmission frames to retrieve its data from the BS. The BS interlaces the active periods of the MSSs to increase the number of scheduled MSSs and splits the overflowing transmission frames to maximize bandwidth utilization. Simulation results reveal that interlacing the active periods of MSSs can increase the number of scheduled MSSs to more than four times that of the Direct scheduling method. Bandwidth utilization can thus be improved by 60%–70%. Splitting the overflowing transmission frames can improve bandwidth utilization by more than 10% over that achieved using the method of interlacing active periods, at a sacrifice of only 1% of the sleep periods in the interlacing active period method. PMID:24523656
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the input image. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization
NASA Astrophysics Data System (ADS)
Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.
2016-06-01
Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller that maximizes their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in IBM ILOG CPLEX Studio. Real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands is used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.
Khader, M M
2013-10-01
In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The proposed method is based on the derived approximate formula of the Laguerre polynomials. The properties of Laguerre polynomials are utilized to reduce FDDEs to a linear or nonlinear system of algebraic equations. Special attention is given to study the error and the convergence analysis of the proposed method. Several numerical examples are provided to confirm that the proposed method is in excellent agreement with the exact solution.
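The polynomial machinery the abstract relies on is available in NumPy's `numpy.polynomial.laguerre` module. The snippet below is only a sketch of that building block, fitting a Laguerre series to a smooth function; it is not the paper's FDDE solver, and the test function and degree are assumptions for illustration:

```python
import numpy as np
from numpy.polynomial import laguerre as L

# Approximate f(x) = exp(-x/2) on [0, 10] by a degree-8 Laguerre series,
# the same kind of expansion used to reduce an FDDE to algebraic equations.
x = np.linspace(0.0, 10.0, 200)
f = np.exp(-x / 2)
coeffs = L.lagfit(x, f, 8)      # least-squares Laguerre coefficients
approx = L.lagval(x, coeffs)    # evaluate the truncated series
err = np.max(np.abs(approx - f))
```

Because the Laguerre coefficients of a smooth exponential decay rapidly, a low-degree truncation already matches the function to high accuracy on the sampled interval.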
Emotion recognition based on multiple order features using fractional Fourier transform
NASA Astrophysics Data System (ADS)
Ren, Bo; Liu, Deyin; Qi, Lin
2017-07-01
In order to deal with the insufficiency of recent algorithms based on the two-dimensional fractional Fourier transform (2D-FrFT), this paper proposes a multiple-order-feature based method for emotion recognition. Most existing methods utilize features of a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to the feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the x-axis and y-axis directions, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.
Utilization of index stations for prediction of interstate traffic volumes.
DOT National Transportation Integrated Search
2006-10-01
To facilitate the collection of traffic volumes along the Interstate System and better utilize available resources, a method to factor adjacent traffic count locations from index counts collected on an annual basis has been proposed. This process...
A Proposal for the use of the Consortium Method in the Design-build system
NASA Astrophysics Data System (ADS)
Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu
In view of the necessity for efficient implementation of public works projects, it is expected that the advanced technical skills of private firms will be utilized to reduce project costs, improve the performance and functions of construction objects, and shorten work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the construction object. It is a system in which the high-level techniques of private firms can be utilized to ensure the quality of design and construction, rational design, and project efficiency. The objective of this study is to examine a method of forming a consortium of civil engineering consultants and construction companies, an issue related to implementing the design-build method. Furthermore, by studying various forms of consortiums to be introduced in the future, it proposes procedural items required to utilize this method during bidding and after contract signing, such as estimate submission by the civil engineering consultants.
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments were performed to validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than Pisarenko harmonic decomposition, modified covariance, and TSA, which contributes to effectively improving the precision of LFMCW radars.
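The linear prediction property mentioned above states that a pure sinusoid satisfies x[n] = 2·cos(w)·x[n-1] − x[n-2]. A minimal sketch (not the paper's phase-match estimator) fits cos(w) by least squares from that recurrence and converts it to a frequency:

```python
import numpy as np

def lp_frequency(x, fs):
    """Frequency of a single real sinusoid from its linear-prediction
    property x[n] = 2*cos(w)*x[n-1] - x[n-2], fit by least squares."""
    x = np.asarray(x, float)
    mid, left, right = x[1:-1], x[:-2], x[2:]
    # least-squares estimate of cos(w) from mid*(left+right) = 2*cos(w)*mid^2
    c = np.dot(mid, left + right) / (2.0 * np.dot(mid, mid))
    w = np.arccos(np.clip(c, -1.0, 1.0))
    return w * fs / (2.0 * np.pi)
```

On a noiseless sinusoid the recurrence holds exactly, so the estimate matches the true frequency to numerical precision; noise robustness is where the paper's autocorrelation and cross-correlation steps come in.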
A New Approach for Resolving Conflicts in Actionable Behavioral Rules
Zhu, Dan; Zeng, Daniel
2014-01-01
Knowledge is considered actionable if users can take direct actions based on such knowledge to their advantage. Among the most important and distinctive actionable knowledge are actionable behavioral rules that can directly and explicitly suggest specific actions to take to influence (restrain or encourage) the behavior in the users' best interest. However, in mining such rules, it often occurs that different rules may suggest the same actions with different expected utilities, which we call conflicting rules. To resolve the conflicts, a previous valid method was proposed. However, inconsistency of the measure for rule evaluating may hinder its performance. To overcome this problem, we develop a new method that utilizes rule ranking procedure as the basis for selecting the rule with the highest utility prediction accuracy. More specifically, we propose an integrative measure, which combines the measures of the support and antecedent length, to evaluate the utility prediction accuracies of conflicting rules. We also introduce a tunable weight parameter to allow the flexibility of integration. We conduct several experiments to test our proposed approach and evaluate the sensitivity of the weight parameter. Empirical results indicate that our approach outperforms those from previous research. PMID:25162054
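The integrative measure can be sketched as a weighted combination of normalized support and antecedent length. The normalization and the default weight below are assumptions for illustration, not the paper's exact formulation:

```python
def integrative_score(support, antecedent_len, max_len, w=0.5):
    """Tunable combination of support and normalized antecedent
    length; w trades confidence in support against rule specificity."""
    len_norm = antecedent_len / max_len
    return w * support + (1.0 - w) * len_norm

def resolve_conflict(rules, w=0.5):
    """rules: list of (name, support, antecedent_len) tuples that all
    suggest the same action; returns the name of the rule with the
    highest integrative score."""
    max_len = max(r[2] for r in rules)
    return max(rules, key=lambda r: integrative_score(r[1], r[2], max_len, w))[0]
```

Varying w shows the sensitivity the abstract mentions: with w = 0.5 a long, moderately supported rule can beat a short, well-supported one, while w = 1.0 falls back to ranking by support alone.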
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on their fidelity and authenticity. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model of simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which is an effective data augmentation method for real IR images.
Procedures for prioritizing road improvements under the statewide highway plan : final report.
DOT National Transportation Integrated Search
1990-01-01
The road improvement prioritizing system currently utilized by the Virginia Department of Transportation is similar to the method utilized by many states: it is a sufficiency rating system that evaluates proposed projects on the basis of points assig...
Inter-Vehicle Communication System Utilizing Autonomous Distributed Transmit Power Control
NASA Astrophysics Data System (ADS)
Hamada, Yuji; Sawa, Yoshitsugu; Goto, Yukio; Kumazawa, Hiroyuki
In ad hoc networks such as inter-vehicle communication (IVC) systems, safety applications in which vehicles periodically broadcast information such as velocity and position are considered. In these applications, if many vehicles broadcast data in a communication area, congestion decreases communication reliability. We propose an autonomous distributed transmit power control method to maintain high communication reliability. In this method, each vehicle controls its transmit power using feedback control. Furthermore, we design a communication protocol to realize the proposed method, and we evaluate its effectiveness using computer simulation.
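The per-vehicle feedback loop can be sketched as a simple proportional controller: power is lowered when the locally observed channel load exceeds a target and raised otherwise. The gain, limits, and the linear load model in the test are assumptions for illustration, not values from the paper:

```python
def update_power(p, load_measured, load_target, k=0.5, p_min=1.0, p_max=10.0):
    """One step of a proportional feedback law for autonomous
    distributed transmit power control: each vehicle adjusts its own
    power toward the load target using only local observations."""
    p = p + k * (load_target - load_measured)
    return min(max(p, p_min), p_max)
```

Under the toy assumption that observed load grows linearly with transmit power (load = 0.1·p), repeated updates converge to the power level that meets the load target, with no coordination between vehicles.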
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of spectral signatures and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is used to discriminate between changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and the experimental results compared with state-of-the-art methods prove the superiority of the proposed method. Contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection. In this case, change detection can be transformed into a sparse representation problem. To the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without any prior assumption about the spectral signature. As a result, the threshold value provided by the proposed method can adapt to different data due to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature, and so can be adapted to different data.
A Pitch Extraction Method with High Frequency Resolution for Singing Evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo
This paper proposes a pitch estimation method suitable for singing evaluation that can be incorporated in karaoke machines. Professional singers and musicians have sharp hearing for music and the singing voice; they recognize whether a singer's voice pitch is "a little off key" or "in tune". Likewise, a pitch estimation method with high frequency resolution is necessary in order to evaluate singing. This paper proposes such a method, utilizing the harmonic characteristic of the autocorrelation function. The proposed method can estimate a fundamental frequency in the range 50–1700 Hz with resolution finer than 3.6 cents under light processing.
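The sub-sample resolution needed for cent-level accuracy can be illustrated with a standard trick: locate the autocorrelation peak at an integer lag, then refine it by parabolic interpolation. This is a generic sketch of the idea, not the paper's exact harmonic-characteristic method, and the search range mirrors the 50–1700 Hz span above:

```python
import numpy as np

def pitch_autocorr(x, fs, fmin=50.0, fmax=1700.0):
    """Fundamental frequency via the autocorrelation peak, refined
    with parabolic interpolation for sub-sample lag resolution."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(r[lo:hi]))
    # parabolic refinement around the integer-lag peak
    a, b, c = r[lag - 1], r[lag], r[lag + 1]
    shift = 0.5 * (a - c) / (a - 2 * b + c)
    return fs / (lag + shift)
```

Without the parabolic step the estimate is quantized to integer lags (about 12 Hz steps near 440 Hz at 16 kHz sampling); with it the error drops well below that, which is the kind of refinement a cent-level evaluator requires.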
Enhancing MPLS Protection Method with Adaptive Segment Repair
NASA Astrophysics Data System (ADS)
Chen, Chin-Ling
We propose a novel adaptive segment repair mechanism to improve traditional MPLS (Multi-Protocol Label Switching) failure recovery. The proposed mechanism protects one or more contiguous high failure probability links by dynamic setup of segment protection. Simulations demonstrate that the proposed mechanism reduces failure recovery time while also increasing network resource utilization.
Double regions growing algorithm for automated satellite image mosaicking
NASA Astrophysics Data System (ADS)
Tan, Yihua; Chen, Chen; Tian, Jinwen
2011-12-01
Feathering is the most widely used method in seamless satellite image mosaicking. A simple but effective algorithm, the double regions growing (DRG) algorithm, which utilizes the shape content of the images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.
Noise-Assisted Concurrent Multipath Traffic Distribution in Ad Hoc Networks
Murata, Masayuki
2013-01-01
The concept of biologically inspired networking has been introduced to tackle unpredictable and unstable situations in computer networks, especially in wireless ad hoc networks where network conditions change continuously, creating a need for robustness and adaptability in control methods. Unfortunately, existing methods often rely heavily on detailed knowledge of each network component and on preconfigured, that is, fine-tuned, parameters. In this paper, we utilize a new concept, called attractor perturbation (AP), which enables controlling network performance using only end-to-end information. Based on AP, we propose a concurrent multipath traffic distribution method, which aims at lowering the average end-to-end delay by adjusting only the transmission rate on each path. We demonstrate through simulations that, by utilizing the attractor perturbation relationship, the proposed method achieves a lower average end-to-end delay than other methods which do not take fluctuations into account. PMID:24319375
Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao
This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter battery storage systems (BSS). In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the Continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors to evaluate and size the storage system for behind-the-meter application, but can also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.
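A core quantity in such an analysis is the lowest demand peak a given battery can achieve against a load profile. The sketch below is a simplified stand-in for the paper's linear program: it bisects on a shaving threshold with a one-pass feasibility check (single discharge episode, no intra-period recharge), assumptions that a full LP formulation would not need:

```python
import numpy as np

def min_peak(load, p_max, e_max, dt=1.0, iters=60):
    """Smallest demand peak (kW) achievable by discharging a battery
    of power p_max (kW) and energy e_max (kWh) against a load profile
    (kW per interval of dt hours). Feasibility is checked with a
    simplified one-episode discharge model; the paper formulates the
    full problem as a linear program instead."""
    load = np.asarray(load, float)
    lo, hi = 0.0, float(load.max())
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        excess = np.clip(load - t, 0.0, None)
        ok = excess.max() <= p_max and excess.sum() * dt <= e_max
        hi, lo = (t, lo) if ok else (hi, t)
    return hi
```

For the toy profile in the test, the power limit alone would allow shaving to 70 kW, but the 40 kWh energy limit binds first, giving a minimum peak of 220/3 ≈ 73.3 kW; the gap between the original and shaved peak times the demand charge rate is the monthly demand-charge saving.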
Acoustics based assessment of respiratory diseases using GMM classification.
Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J
2010-01-01
The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, applicable traditional techniques from the speech processing domain were used to evaluate lung sounds obtained with a digital stethoscope. Traditional methods for evaluating asthma involve auscultation and spirometry, but the more sensitive electronic stethoscopes currently available, combined with quantitative signal analysis methods, offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian mixture models (GMM), which should assist in the broader analysis, identification, and diagnosis of asthma based on frequency domain analysis of wheezing and crackles.
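The classification step can be sketched with class-conditional Gaussians: fit one model per class (e.g. normal breathing vs. wheeze) on acoustic feature vectors and assign each new frame to the class with the higher likelihood. For brevity this uses a single diagonal Gaussian per class, the one-component special case of a GMM; a real system would fit a multi-component mixture with EM, and the features here are synthetic placeholders:

```python
import numpy as np

def fit_gaussian(X):
    """Fit one diagonal Gaussian per class (the single-component
    special case of a GMM; a stand-in for a full EM-trained mixture)."""
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6   # small floor for numerical stability
    return mu, var

def log_likelihood(X, params):
    mu, var = params
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)

def classify(X, class_params):
    """Assign each feature vector (e.g. a spectral frame of a lung
    sound) to the class whose model gives the highest likelihood."""
    scores = np.stack([log_likelihood(X, p) for p in class_params])
    return np.argmax(scores, axis=0)
```

Trained on two well-separated synthetic feature clouds, the classifier assigns points near each class mean to the correct class, mirroring how GMM likelihoods would separate normal from adventitious sound frames.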
Video Encryption and Decryption on Quantum Computers
NASA Astrophysics Data System (ADS)
Yan, Fei; Iliyasu, Abdullah M.; Venegas-Andraca, Salvador E.; Yang, Huamin
2015-08-01
A method for video encryption and decryption on quantum computers is proposed based on color information transformations on each frame encoding the content of the video. The proposed method provides a flexible operation to encrypt quantum video by means of quantum measurement in order to enhance the security of the video. To validate the proposed approach, a tetris tile-matching puzzle game video is utilized in the experimental simulations. The results obtained suggest that the proposed method enhances the security and speed of quantum video encryption and decryption, both properties required for secure transmission and sharing of video content in quantum communication.
Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi
2016-10-01
Balanced Scorecard (BSC) is a strategic evaluation tool using both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the Balanced Scorecard (BSC) and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods for ranking the alternatives: Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
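Of the four ranking methods listed, TOPSIS is the most mechanical and can be sketched compactly: vector-normalize the decision matrix, apply the criterion weights, and rank alternatives by relative closeness to the ideal solution. The matrix and weights in the test are illustrative, not the paper's data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking. matrix: alternatives x criteria; weights:
    criterion weights (e.g. from ANP); benefit: True for criteria to
    maximize, False for cost criteria. Returns relative closeness to
    the ideal solution (higher is better)."""
    M = np.asarray(matrix, float)
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights, float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)
```

An alternative that is best on every criterion gets closeness 1, the worst gets 0, and the rest fall in between; running ARAS, COPRAS, and MOORA on the same weighted matrix and combining their rankings via utility intervals is the integration step the abstract describes.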
Wu, Wenchuan; Fang, Sheng; Guo, Hua
2014-06-01
Aiming at motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose a joint correction method in this paper to correct the two kinds of artifacts simultaneously without additional acquisition of navigation data or a field map. We applied the proposed method to MRI data acquired with a multi-shot variable-density spiral sequence and used an auto-focusing technique for image deblurring. We also used a direct or iterative method to correct motion-induced phase errors in the deblurring process. In vivo MRI experiments demonstrated that the proposed method can effectively suppress motion artifacts and off-resonance artifacts and achieve images with fine structures. In addition, applying the proposed method does not increase scan time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yong; Chu, Yuchuan; He, Xiaoming
This paper proposes a domain decomposition method for the coupled stationary Navier-Stokes and Darcy equations with the Beavers-Joseph-Saffman interface condition in order to improve the efficiency of the finite element method. The physical interface conditions are directly utilized to construct the boundary conditions on the interface and then decouple the Navier-Stokes and Darcy equations. Newton iteration will be used to deal with the nonlinear systems. Numerical results are presented to illustrate the features of the proposed method.
NASA Astrophysics Data System (ADS)
Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao
2017-09-01
This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. We utilize an equivalent network to simulate the dynamic power system. Moreover, to account for the performance of the converter station, a model of its core components provides a basic simulation model. The proposed method is applied to an equivalent of a real power system.
Capitation pricing: Adjusting for prior utilization and physician discretion
Anderson, Gerard F.; Cantor, Joel C.; Steinberg, Earl P.; Holloway, James
1986-01-01
As the number of Medicare beneficiaries receiving care under at-risk capitation arrangements increases, the method for setting payment rates will come under increasing scrutiny. A number of modifications to the current adjusted average per capita cost (AAPCC) methodology have been proposed, including an adjustment for prior utilization. In this article, we propose use of a utilization adjustment that includes only hospitalizations involving low or moderate physician discretion in the decision to hospitalize. This modification avoids discrimination against capitated systems that prevent certain discretionary admissions. The model also explains more of the variance in per capita expenditures than does the current AAPCC. PMID:10312010
Zhang, Kai; Wang, Ke; Zhu, Xue; Zhang, Jue; Xu, Lan; Huang, Biao; Xie, Minhao
2014-01-07
A general and reliable strategy for the detection of cocaine was proposed utilizing DNA-templated silver nanoclusters as signal indicators and the nicking endonuclease-assisted signal amplification method. This strategy can detect cocaine specifically with a detection limit as low as 2 nM by using a small volume of 5 μL.
NASA Astrophysics Data System (ADS)
Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.
2018-04-01
Site classification utilizing the average shear wave velocity over the top 30 meters of depth (Vs(30)) is a typical parameter. Numerous geophysical methods have been proposed for estimating shear wave velocity, utilizing an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is practiced by many specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine site classification on soft and hard ground using the MASW method. The subsurface classification was made utilizing the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen to acquire shear wave velocities: one in the state of Pulau Pinang for soft soil and one in Perlis for hard rock. Results suggest that the MASW technique can be utilized to spatially map the distribution of shear wave velocity (Vs(30)) in soil and rock to characterize areas.
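Once a layered velocity profile is inverted from the MASW dispersion data, Vs(30) is the time-averaged velocity Vs30 = 30 / Σ(h_i / v_i) over the top 30 m, which then maps to a site class. The NEHRP boundaries below are the standard published values; the layer profile in the test is illustrative:

```python
def vs30(thicknesses_m, velocities_m_s):
    """Time-averaged shear wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i). Layer thicknesses must total 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-9
    return 30.0 / sum(h / v for h, v in zip(thicknesses_m, velocities_m_s))

def nehrp_class(v):
    """Simplified NEHRP site class from Vs30 (m/s):
    A > 1500, B 760-1500, C 360-760, D 180-360, E < 180."""
    for label, lower in (("A", 1500.0), ("B", 760.0), ("C", 360.0), ("D", 180.0)):
        if v > lower:
            return label
    return "E"
```

Note that Vs(30) is a travel-time average, not an arithmetic mean: a slow surface layer dominates, which is why soft sites like the Pulau Pinang example classify lower than their deeper, faster layers would suggest.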
Unaldi, Numan; Temel, Samil; Asari, Vijayan K.
2012-01-01
One of the most critical issues in Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors to achieve maximum coverage of a terrain. Optimal sensor deployment, which minimizes the consumed energy, communication time and manpower for maintaining the network, has attracted growing interest, with an increasing number of studies on the subject in the last decade. Most studies in the literature address two-dimensional (2D) surfaces; however, real-world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains is proposed, in which the sensor movements are carried out within the mutation phase of a genetic algorithm (GA). The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN by deploying a limited number of sensors on a 3D surface, utilizing a probabilistic sensing model and Bresenham's line of sight (LOS) algorithm. The approach is novel to the literature, and its performance is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method; the results reveal that the proposed method is more powerful and more successful for sensor deployment on 3D terrains. PMID:22666078
Effective evaluation of privacy protection techniques in visible and thermal imagery
NASA Astrophysics Data System (ADS)
Nawaz, Tahir; Berg, Amanda; Ferryman, James; Ahlberg, Jörgen; Felsberg, Michael
2017-09-01
Privacy protection may be defined as replacing the original content in an image region with a (less intrusive) content having modified target appearance information to make it less recognizable by applying a privacy protection technique. Indeed, the development of privacy protection techniques also needs to be complemented with an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on the use of subjective judgments or assume a specific target type in image data and use target detection and recognition accuracies to assess privacy protection. An annotation-free evaluation method that is neither subjective nor assumes a specific target type is proposed. It assesses two key aspects of privacy protection: "protection" and "utility." Protection is quantified as an appearance similarity, and utility is measured as a structural similarity between original and privacy-protected image regions. We performed an extensive experimentation using six challenging datasets (having 12 video sequences), including a new dataset (having six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate effectiveness of the proposed method by evaluating six image-based privacy protection techniques and also show comparisons of the proposed method over existing methods.
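The two scores can be sketched with simple image statistics: utility as a structural similarity between the original and protected regions, and protection as one minus an appearance (normalized correlation) similarity. These are simplified single-window proxies, not the paper's exact measures; practical SSIM averages over local windows:

```python
import numpy as np

def ssim_global(a, b, L=1.0):
    """Single-window SSIM between two image regions (utility proxy);
    the local-window averaging used in practice is omitted for brevity."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / ((ma**2 + mb**2 + c1) * (va + vb + c2))

def protection_score(original, protected):
    """1 - normalized cross-correlation: higher means the target's
    appearance was altered more strongly by the protection technique."""
    a = (original - original.mean()) / (original.std() + 1e-12)
    b = (protected - protected.mean()) / (protected.std() + 1e-12)
    return 1.0 - float(np.mean(a * b))
```

An unmodified region scores utility 1 and protection 0; a good privacy filter pushes protection up while keeping utility (scene structure) high, which is exactly the trade-off the evaluation method quantifies.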
A cardioid oscillator with asymmetric time ratio for establishing CPG models.
Fu, Q; Wang, D H; Xu, L; Yuan, G
2018-01-13
Nonlinear oscillators are commonly utilized by bionic scientists to establish central pattern generator models for imitating rhythmic motions. In the natural world, many rhythmic motions possess asymmetric time ratios, meaning that the forward and backward phases of an oscillating process last different times within one period. To model rhythmic motions with asymmetric time ratios, nonlinear oscillators with asymmetric forward and backward trajectories within one period should be studied. In this paper, based on the property of the invariant set, a method to design a closed curve in the phase plane of a dynamic system as its limit cycle is proposed: by making the derivative of the closed-curve expression along the system trajectories equal to zero, the closed curve is designed as the system's limit cycle. Utilizing the proposed limit-cycle design method, noting that a cardioid is a kind of asymmetric closed curve, and applying the global invariant set theory, a cardioid oscillator with asymmetric time ratios that uses a cardioid curve as its limit cycle is achieved. On this basis, numerical simulations are conducted to analyze the behaviors of the cardioid oscillator, and an example using the established cardioid oscillator to simulate rhythmic motions of the hip joint of a human body in the sagittal plane is presented. The results of the numerical simulations indicate that, whatever the initial condition is and without any outside input, the proposed cardioid oscillator possesses the following properties: (1) it generates a series of periodic, anti-interference, self-exciting trajectories; (2) the generated trajectories possess an asymmetric time ratio; and (3) the time ratio can be regulated by adjusting the oscillator's parameters.
Furthermore, the comparison between the trajectories simulated by the established cardioid oscillator and the measured trajectories of the hip angle of a human body shows that the proposed cardioid oscillator is suitable for imitating the rhythmic motions of the human hip with asymmetric time ratios.
Nezarat, Amin; Dastghaibifard, GH
2015-01-01
One of the most complex issues in the cloud computing environment is resource allocation: on one hand, the cloud provider expects the most profitability, and on the other hand, users expect the best resources at their disposal given their budget and time constraints. Most previous work has used heuristic and evolutionary approaches to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium, where players are no longer inclined to alter their bids for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, yields the fewest service level agreement violations, and provides the most utility to the provider. PMID:26431035
Nezarat, Amin; Dastghaibifard, G H
2015-01-01
One of the most complex issues in the cloud computing environment is resource allocation: on one hand, the cloud provider expects the most profitability, and on the other hand, users expect the best resources at their disposal given their budget and time constraints. Most previous work has used heuristic and evolutionary approaches to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium, where players are no longer inclined to alter their bids for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, yields the fewest service level agreement violations, and provides the most utility to the provider.
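The convergence of repeated bidding to an equilibrium can be illustrated with a much simpler complete-information ascending auction (a toy sketch, not the paper's incomplete-information game; the valuations and bid increment are hypothetical). Bidding stops, as at a Nash equilibrium, once no remaining bidder wants to raise:

```python
def english_auction(valuations, increment=1.0):
    """Simulate an ascending auction: the price rises while two or more
    bidders still value the item above the next price. Returns (winner, price)."""
    price = 0.0
    while True:
        active = [i for i, v in enumerate(valuations) if v > price + increment]
        if len(active) <= 1:
            break                 # no one left who wants to raise: stable point
        price += increment
    # the winner is the highest-valuation bidder, paying roughly the
    # second-highest valuation (the classic equilibrium outcome)
    winner = max(range(len(valuations)), key=lambda i: valuations[i])
    return winner, price
```

With valuations (10, 7, 3) the price climbs until only the strongest bidder remains willing to raise, settling just below the second-highest valuation.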
NASA Astrophysics Data System (ADS)
Chady, Tomasz; Gorący, Krzysztof
2018-04-01
Active infrared thermography is increasingly used for nondestructive testing of various materials. The properties of this method make it uniquely suited to the inspection of composites. In active thermography, an external energy source is usually used to induce a thermal contrast inside the tested objects, and conventional heating methods (such as halogen lamps or flash lamps) are utilized for this purpose. In this study, we propose instead to use a cooling unit. The proposed system consists of a thermal imaging infrared camera, used to observe the surface of the inspected specimen, and a specially designed cooling unit with thermoelectric (Peltier) modules.
78 FR 70059 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
Empirical mode decomposition of the ECG signal for noise removal
NASA Astrophysics Data System (ADS)
Khan, Jesmin; Bhuiyan, Sharif; Murphy, Gregory; Alam, Mohammad
2011-04-01
Electrocardiography is a diagnostic procedure for the detection and diagnosis of heart abnormalities. The electrocardiogram (ECG) signal contains important information that is utilized by physicians for the diagnosis and analysis of heart diseases, so a good-quality ECG signal plays a vital role in the interpretation and identification of pathological, anatomical and physiological aspects of the cardiac muscle. However, ECG signals are corrupted by noise, which severely limits the utility of the recorded signal for medical evaluation. The most common noise present in the ECG signal is high-frequency noise caused by forces acting on the electrodes. In this paper, we propose a new ECG denoising method based on the empirical mode decomposition (EMD). The proposed method enhances the ECG signal by removing the noise with minimum signal distortion. Simulations on the MIT-BIH database verify the efficacy of the proposed algorithm, and experiments show that the presented method removes noise from the ECG signal very effectively.
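The denoising step of an EMD-based scheme, partial reconstruction after discarding the highest-frequency intrinsic mode functions (IMFs), can be sketched as follows. The sifting process that actually produces the IMFs is omitted here; the two toy "IMFs" below are constructed by hand to play the roles of a noise mode and a clean mode:

```python
import math

def emd_denoise(imfs, drop=1):
    """Reconstruct a signal from its IMFs, discarding the first `drop`
    (highest-frequency) modes, which typically carry most of the noise."""
    n = len(imfs[0])
    kept = imfs[drop:]
    return [sum(imf[i] for imf in kept) for i in range(n)]

# Hand-built stand-ins for IMFs that a sifting routine would produce:
t = [i / 100.0 for i in range(200)]
imf0 = [0.2 * math.sin(2 * math.pi * 25 * x) for x in t]   # "noise" mode
imf1 = [math.sin(2 * math.pi * 1 * x) for x in t]          # "clean" low-frequency mode
noisy = [a + b for a, b in zip(imf0, imf1)]
clean = emd_denoise([imf0, imf1], drop=1)
```

Dropping the first mode removes the high-frequency component while leaving the underlying waveform intact, which is the principle behind EMD-based ECG denoising.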
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
Winkelman, Warren J; Leonard, Kevin J
2004-01-01
There are constraints embedded in medical record structure that limit use by patients in self-directed disease management. Through systematic review of the literature from a critical perspective, four characteristics that either enhance or mitigate the influence of medical record structure on patient utilization of an electronic patient record (EPR) system have been identified: environmental pressures, physician centeredness, collaborative organizational culture, and patient centeredness. An evaluation framework is proposed for use when considering adaptation of existing EPR systems for online patient access. Exemplars of patient-accessible EPR systems from the literature are evaluated utilizing the framework. From this study, it appears that traditional information system research and development methods may not wholly capture many pertinent social issues that arise when expanding access of EPR systems to patients. Critically rooted methods such as action research can directly inform development strategies so that these systems may positively influence health outcomes.
NASA Astrophysics Data System (ADS)
Han, Jin; Kim, Jong-Wook; Lee, Hiwon; Min, Byung-Kwon; Lee, Sang Jo
2009-02-01
A new microfabrication method that combines localized ion implantation and magnetorheological finishing is proposed. The proposed technique involves two steps. First, selected regions of a silicon wafer are irradiated with gallium ions using a focused ion beam system; the mechanical properties of the irradiated regions are altered as a result of the ion implantation. Second, the wafer is processed using a magnetorheological finishing method, during which the regions not implanted with ions are preferentially removed. This difference in material removal rate is utilized for microfabrication. The mechanisms of the proposed method are discussed, and applications are presented.
NASA Astrophysics Data System (ADS)
Józwik, Michal; Mikuła, Marta; Kozacki, Tomasz; Kostencka, Julianna; Gorecki, Christophe
2017-06-01
In this contribution, we propose a digital holographic microscopy (DHM) method that enables measurement of high-numerical-aperture (NA) spherical and aspherical microstructures of both concave and convex shapes. The proposed method utilizes reflection of a spherical illumination beam from the object surface and its interference with a spherical reference beam of similar curvature. In this case, the NA of the DHM is fully utilized for illumination and imaging of the reflected object beam. Thus, the system captures the phase coming from larger areas of the quasi-spherical object and therefore offers the possibility of high-accuracy characterization of its surface, even in areas of high inclination. The proposed measurement procedure determines all parameters required for accurate shape recovery: the location of the object focus point and the positions of the illumination and reference point sources. The utility of the method is demonstrated by characterizing the surfaces of high-NA focusing objects. The accuracy is first verified by characterizing a known reference sphere with a low sphericity error. The method is then applied to shape measurement of spherical and aspheric microlenses. The results provide a full-field reconstruction of high-NA topography with nanometer-range resolution. The surface sphericity is evaluated by the deviation from the best-fitted sphere or asphere, along with the important parameters of the measured microlens, e.g., radius of curvature and conic constant.
Multi-Domain Transfer Learning for Early Diagnosis of Alzheimer's Disease.
Cheng, Bo; Liu, Mingxia; Shen, Dinggang; Li, Zuoyong; Zhang, Daoqiang
2017-04-01
Recently, transfer learning has been successfully applied in early diagnosis of Alzheimer's Disease (AD) based on multi-domain data. However, most existing methods only use data from a single auxiliary domain, and thus cannot utilize the intrinsic useful correlation information from multiple domains. Accordingly, in this paper, we consider the joint learning of tasks in multi-auxiliary domains and the target domain, and propose a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that can identify the disease status for early AD detection. We evaluate our method on 807 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline magnetic resonance imaging (MRI) data. The experimental results show that the proposed MDTL method can effectively utilize multi-auxiliary domain data to improve the learning performance in the target domain, compared with several state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Tanigawa, Hiroshi; Seno, Hiroaki; Watanabe, Yoshiaki; Nakajima, Koshiro
1998-05-01
A nondestructive inspection method is proposed for estimating the contact condition of soil on the surface of an underground pipe, utilizing the resonance of a transverse Lamb wave circulating along the pipe wall. The Q factor of the resonance is measured under several contact conditions by sweeping the vibration frequency in a Fiberglass Reinforced Plastic Mortar (FRPM) pipe with a 150-mm inner diameter. It is confirmed that the Q factor responds clearly to changes in the contact conditions. For example, the Q factor is 8.4 when the pipe is in ideal contact with the soil plane and rises to 19.2 when a 100-mm-diameter void is located at the contact surface of the soil. The spatial resolution of the proposed inspection method is also measured by moving the sensing point along the length of the pipe toward an 85-mm-diameter void; the resolution of the proposed method is estimated at about 50 mm.
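The Q factors quoted above can be estimated from a swept-frequency resonance curve via the half-power bandwidth, Q = f0/Δf (a generic sketch of the standard estimation, not the authors' measurement procedure; the Lorentzian test curve in the usage below is hypothetical):

```python
def q_factor(freqs, amps):
    """Estimate Q = f0 / delta_f from a measured resonance curve, where
    delta_f is the half-power bandwidth (amplitude down by 1/sqrt(2))."""
    peak = max(amps)
    i0 = amps.index(peak)
    half = peak / 2 ** 0.5
    # walk outward from the peak to the half-power crossings
    lo = i0
    while lo > 0 and amps[lo] >= half:
        lo -= 1
    hi = i0
    while hi < len(amps) - 1 and amps[hi] >= half:
        hi += 1
    return freqs[i0] / (freqs[hi] - freqs[lo])
```

Applied to a synthetic Lorentzian resonance with f0 = 100 and true Q = 10, the estimate lands close to 10, limited only by the frequency-sweep step size.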
Multi-Domain Transfer Learning for Early Diagnosis of Alzheimer’s Disease
Cheng, Bo; Liu, Mingxia; Li, Zuoyong
2017-01-01
Recently, transfer learning has been successfully applied in early diagnosis of Alzheimer's Disease (AD) based on multi-domain data. However, most existing methods only use data from a single auxiliary domain, and thus cannot utilize the intrinsic useful correlation information from multiple domains. Accordingly, in this paper, we consider the joint learning of tasks in multi-auxiliary domains and the target domain, and propose a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that can identify the disease status for early AD detection. We evaluate our method on 807 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline magnetic resonance imaging (MRI) data. The experimental results show that the proposed MDTL method can effectively utilize multi-auxiliary domain data to improve the learning performance in the target domain, compared with several state-of-the-art methods. PMID:27928657
Determining the Number of Components from the Matrix of Partial Correlations
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
A method is presented for determining the number of components to retain in a principal components or image components analysis which utilizes a matrix of partial correlations. Advantages and uses of the method are discussed and a comparison of the proposed method with existing methods is presented. (JKS)
Khalil, Mohammed S.; Khan, Muhammad Khurram; Alginahi, Yasser M.
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and detection of forgery in Quran images. A two-layer embedding scheme, operating in the wavelet and spatial domains, is introduced to enhance the sensitivity of the fragile watermark and to defend against attacks. A discrete wavelet transform is applied to decompose the host image prior to embedding the first watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark, making it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed methods are fragile and achieve superior tamper detection even when the tampered area is very small. PMID:25028681
Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and detection of forgery in Quran images. A two-layer embedding scheme, operating in the wavelet and spatial domains, is introduced to enhance the sensitivity of the fragile watermark and to defend against attacks. A discrete wavelet transform is applied to decompose the host image prior to embedding the first watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark, making it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed methods are fragile and achieve superior tamper detection even when the tampered area is very small.
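The spatial-domain layer of such a scheme, hiding a second watermark in the least significant bits, can be sketched as follows (the wavelet layer and the chaotic scrambling are omitted; the pixel values and watermark bits are hypothetical):

```python
def embed_lsb(pixels, bits):
    """Hide watermark bits in the least significant bit of the first
    len(bits) pixels (8-bit intensities, 0-255)."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n hidden bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

Each pixel changes by at most 1 intensity level, which is why LSB embedding preserves image quality while any local tampering disturbs the recovered bits, making the watermark fragile.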
NASA Astrophysics Data System (ADS)
Limić, Nedzad; Valković, Vladivoj
1996-04-01
Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples contain information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, sewage systems, etc. An efficient analysis of pollution must determine the contribution from each individual source. In this work it is demonstrated that a modelling method, based on a unique interpretation of concentrations in sediments from all sampling stations, can be utilized for solving this problem. The proposed method combines PIXE, as an efficient technique for determining pollutant concentrations, with the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for calculating the contributions of the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk, Croatia.
Deep neural network-based bandwidth enhancement of photoacoustic data.
Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K
2017-11-01
Photoacoustic (PA) signals collected at the boundary of tissue are always band-limited. A deep neural network is proposed to enhance the bandwidth (BW) of the detected PA signal, thereby improving the quantitative accuracy of the reconstructed PA images. A least-squares deconvolution method that utilizes the Tikhonov regularization framework was used for comparison with the proposed network. The proposed method was evaluated using both numerical and experimental data. The results indicate that the proposed method is capable of enhancing the BW of the detected PA signal, which in turn improves the contrast recovery and quality of the reconstructed PA images without adding any significant computational burden. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra
2016-04-15
In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, Low Impact Development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which considers three objective functions of minimizing runoff volume, runoff pollution and implementation cost of LIDs, is utilized. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by the NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models including a non-cooperative game, Nash model and social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution providing the LIDs' areas, locations and implementation cost. The proposed framework is applied for urban water quality and quantity management in the northern part of Tehran metropolitan city, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage of reduction in TSS (Total Suspended Solid) load and runoff volume, in comparison with the Borda count and approval voting methods. Besides, the Nash method presents a management scenario with the highest cost for LIDs' implementation and the maximum values for percentage of runoff volume reduction and TSS removal. The results also signify that selection of an appropriate management scenario by stakeholders in the study area depends on the available financial resources and the relative importance of runoff quality improvement in comparison with reducing the runoff volume. 
Copyright © 2016 Elsevier B.V. All rights reserved.
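The non-dominated solutions that NSGA-II hands to the stakeholders are defined by Pareto dominance over the three minimized objectives (runoff volume, runoff pollution, LID implementation cost). The filtering step can be sketched as follows (a minimal illustration of the dominance test only, not the NSGA-II implementation; the objective vectors are hypothetical):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    front = []
    for i, p in enumerate(points):
        if not any(dominates(points[j], p) for j in range(len(points)) if j != i):
            front.append(p)
    return front
```

Each surviving vector is one candidate management scenario; the voting and bargaining procedures then choose among exactly this set.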
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the legacy "chicken-and-egg" dilemma in video coding is overcome by this learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
Ye, Huimin; Chen, Elizabeth S.
2011-01-01
In order to support the increasing need to share electronic health data for research purposes, various methods have been proposed for privacy preservation, including k-anonymity. Many k-anonymity models provide the same level of anonymization regardless of practical need, which may decrease the utility of the dataset for a particular research study. In this study, we explore extensions to the k-anonymity algorithm that aim to satisfy the heterogeneous needs of different researchers while preserving privacy as well as the utility of the dataset. The proposed algorithm, Attribute Utility Motivated k-anonymization (AUM), involves analyzing the characteristics of attributes and utilizing them to minimize information loss during the anonymization process. Through comparison with two existing algorithms, Mondrian and Incognito, preliminary results indicate that AUM may preserve more information from original datasets, thus providing higher quality results with lower distortion. PMID:22195223
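The k-anonymity constraint that any such algorithm must satisfy, every quasi-identifier combination shared by at least k records, can be checked in a few lines (a sketch of the privacy check only, not the AUM algorithm; the field names and generalized values are hypothetical):

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check k-anonymity: every combination of quasi-identifier values
    must be shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```

Generalization (e.g. replacing an exact age with a range, or truncating a zip code) enlarges these equivalence groups until the check passes, which is exactly where the information loss that AUM tries to minimize comes from.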
Reversible integer wavelet transform for blind image hiding method
Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez
2017-01-01
In this article, a blind, reversible data-hiding method that embeds secret data into a cover image is proposed. The key advantage of this work is that it addresses the privacy and secrecy issues raised during data transmission over the internet. First, the data is decomposed into sub-bands using integer wavelets. For decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximation sub-band of the cover image. The proposed method achieves high capacity and high imperceptibility of the embedded secret data. With the utilization of the family of integer wavelets, the proposed approach becomes more efficient for the hiding and retrieval process: it retrieves the secret hidden data blindly, without requiring the original cover image. PMID:28498855
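The reversibility that integer wavelets provide can be illustrated with the simplest member of the family, the integer Haar (S) transform implemented by lifting (a sketch, not the Fresnelet transform used in the paper): integer input is recovered exactly, which is the property that makes blind, lossless recovery of the cover image possible.

```python
def haar_forward(x):
    """One level of the reversible integer Haar (S) transform via lifting.
    Input length must be even; returns (approximation, detail)."""
    assert len(x) % 2 == 0
    d = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]      # detail
    s = [x[2 * i + 1] + (d[i] >> 1) for i in range(len(x) // 2)]   # approximation
    return s, d

def haar_inverse(s, d):
    """Exact integer inverse of haar_forward."""
    x = []
    for si, di in zip(s, d):
        b = si - (di >> 1)
        x.extend([b + di, b])
    return x
```

Because both directions use only integer additions and arithmetic shifts, the round trip is bit-exact, unlike a floating-point wavelet transform.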
Cone beam x-ray luminescence computed tomography: a feasibility study.
Chen, Dongmei; Zhu, Shouping; Yi, Huangjian; Zhang, Xianghan; Chen, Duofang; Liang, Jimin; Tian, Jie
2013-03-01
The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities to perform molecular imaging by x ray. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera. This resulted in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described. The validity of the imaging strategy has been studied in this paper. In the cone beam XLCT system, the cone beam x ray was adopted to illuminate the sample and a highly sensitive CCD camera was utilized to acquire luminescent photons emitted from the sample. Photons scattering in biological tissues makes it an ill-posed problem to reconstruct the 3D distribution of the x-ray luminescent sample in the cone beam XLCT. In order to overcome this issue, the authors used the diffusion approximation model to describe the photon propagation in tissues, and employed the sparse regularization method for reconstruction. An incomplete variables truncated conjugate gradient method and permissible region strategy were used for reconstruction. Meanwhile, traditional x-ray CT imaging could also be performed in this system. The x-ray attenuation effect has been considered in their imaging model, which is helpful in improving the reconstruction accuracy. First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method. The permissible region strategy was applied and reduced the reconstruction error to less than 2 mm. 
The robustness and stability were then evaluated with different numbers of views, regularization parameters, measurement noise levels, and mismatched optical parameters; the reconstruction results showed that these settings had only a small effect on the reconstruction. A nonhomogeneous phantom simulation was also carried out to represent a more complex experimental situation and to evaluate the proposed method. Second, physical cylinder phantom experiments showed similar results in the prototype XLCT system. Utilizing numerical simulations and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method. Furthermore, compared with the previous narrow-beam XLCT, the cone beam XLCT can more fully utilize the x-ray dose, and the scanning time is shortened greatly. Both the simulation and physical phantom experiments indicated that the proposed method is feasible for the general case and for actual experiments.
A social choice-based methodology for treated wastewater reuse in urban and suburban areas.
Mahjouri, Najmeh; Pourmand, Ehsan
2017-07-01
Reusing treated wastewater for supplying water demands such as landscape and agricultural irrigation in urban and suburban areas has become a major water supply approach especially in regions struggling with water shortage. Due to limited available treated wastewater to satisfy all water demands, conflicts may arise in allocating treated wastewater to water users. Since there is usually more than one decision maker and more than one criterion to measure the impact of each water allocation scenario, effective tools are needed to combine individual preferences to reach a collective decision. In this paper, a new social choice (SC) method, which can consider some indifference thresholds for decision makers, is proposed for evaluating and ranking treated wastewater and urban runoff allocation scenarios to water users in urban and suburban areas. Some SC methods, namely plurality voting, Borda count, pairwise comparisons, Hare system, dictatorship, and approval voting, are applied for comparing and evaluating the results. Different scenarios are proposed for allocating treated wastewater and urban runoff to landscape irrigation, agricultural lands as well as artificial recharge of aquifer in the Tehran metropolitan Area, Iran. The main stakeholders rank the proposed scenarios based on their utilities using two different approaches. The proposed method suggests ranking of the scenarios based on the stakeholders' utilities and considering the scores they assigned to each scenario. Comparing the results of the proposed method with those of six different SC methods shows that the obtained ranks are mostly in compliance with the social welfare.
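One of the comparison procedures above, the Borda count, can be sketched in a few lines (a minimal illustration of the standard rule, without the indifference thresholds of the proposed method; the scenario names and stakeholder rankings are hypothetical):

```python
def borda(rankings, candidates):
    """Borda count: a candidate ranked r-th (0-based) among m candidates
    earns m - 1 - r points from that voter; the highest total wins."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in rankings:
        for r, c in enumerate(ranking):
            scores[c] += m - 1 - r
    winner = max(candidates, key=lambda c: scores[c])
    return winner, scores
```

Each stakeholder submits a full ranking of the allocation scenarios, and the scenario with the highest aggregate score is the collective choice.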
Kumar, Shiu; Mamun, Kabir; Sharma, Alok
2017-12-01
Classification of electroencephalography (EEG) signals for motor imagery based brain computer interface (MI-BCI) is an exigent task, and common spatial pattern (CSP) has been extensively explored for this purpose. In this work, we focused on developing a new framework for classification of EEG signals for MI-BCI. We propose a single band CSP framework for MI-BCI that utilizes the concept of tangent space mapping (TSM) in the manifold of covariance matrices. The proposed method is named CSP-TSM. Spatial filtering is performed on the bandpass filtered MI EEG signal. The Riemannian tangent space is utilized for extracting features from the spatially filtered signal. The TSM features are then fused with the CSP variance based features, and feature selection is performed using Lasso. Linear discriminant analysis (LDA) is then applied to the selected features, and finally classification is done using a support vector machine (SVM) classifier. The proposed framework gives improved performance for MI EEG signal classification in comparison with several competing methods. Experiments conducted show that the proposed framework reduces the overall classification error rate for MI-BCI by 3.16%, 5.10% and 1.70% (for BCI Competition III dataset IVa, BCI Competition IV Dataset I and BCI Competition IV Dataset IIb, respectively) compared to the conventional CSP method under the same experimental settings. The proposed CSP-TSM method produces promising results when compared with several competing methods in this paper. In addition, its computational complexity is lower than that of the TSM method. Our proposed CSP-TSM framework can potentially be used for developing improved MI-BCI systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
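The CSP spatial-filtering and log-variance feature steps of such a pipeline can be sketched from the standard simultaneous-diagonalization formulation. This is a minimal sketch, not the authors' implementation; the TSM, Lasso, LDA, and SVM stages are omitted:

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns W of shape (n_filters, channels): rows that maximize variance
    for one class while minimizing it for the other.
    """
    def mean_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in the
    # whitened space (equivalent to solving C1 w = lambda (C1 + C2) w).
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T
    lam, V = np.linalg.eigh(P @ C1 @ P.T)
    W = (V.T @ P)[np.argsort(lam)[::-1]]   # sort filters by eigenvalue
    # Keep filters from both ends of the eigenvalue spectrum.
    pick = list(range(n_filters // 2)) + list(range(-n_filters // 2, 0))
    return W[pick]

def log_var_features(W, X):
    """Classic CSP features: log of normalized variance per filter."""
    Z = np.einsum("fc,tcs->tfs", W, X)     # apply filters to each trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

With trials of shape (trials, channels, samples) from two imagery classes, `csp_filters` followed by `log_var_features` yields the variance-based features that the paper fuses with the tangent-space features.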
Cao, Yong; Chu, Yuchuan; He, Xiaoming; ...
2013-01-01
This paper proposes a domain decomposition method for the coupled stationary Navier-Stokes and Darcy equations with the Beavers-Joseph-Saffman interface condition in order to improve the efficiency of the finite element method. The physical interface conditions are directly utilized to construct the boundary conditions on the interface and then decouple the Navier-Stokes and Darcy equations. Newton iteration is used to deal with the nonlinear systems. Numerical results are presented to illustrate the features of the proposed method.
Utilization Bound of Non-preemptive Fixed Priority Schedulers
NASA Astrophysics Data System (ADS)
Park, Moonju; Chae, Jinseok
It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
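The letter's tighter bound is not reproduced here, but the classic Liu-and-Layland-style sufficient test it improves upon, extended with the blocking term that non-preemptive execution introduces, can be sketched as follows (the example task set is hypothetical):

```python
def ll_bound_with_blocking(tasks):
    """Sufficient (not necessary) schedulability test for non-preemptive
    fixed-priority scheduling, in the spirit of the Liu-Layland bound.

    tasks: list of (C, T) pairs sorted by decreasing priority, where C is
    worst-case execution time and T the period.  Because a running job
    cannot be preempted, a lower-priority job can block a higher-priority
    one for at most the largest lower-priority C; that blocking term is
    charged to the utilization at each priority level.
    """
    for i in range(len(tasks)):
        util = sum(c / t for c, t in tasks[: i + 1])
        blocking = max((c for c, _ in tasks[i + 1:]), default=0.0)
        bound = (i + 1) * (2 ** (1.0 / (i + 1)) - 1)
        if util + blocking / tasks[i][1] > bound:
            return False   # the test is inconclusive for this task set
    return True            # provably schedulable

# Three light tasks with modest blocking: passes the sufficient test.
print(ll_bound_with_blocking([(1, 10), (2, 20), (1, 40)]))
```

A task set failing this test is not necessarily unschedulable; that is exactly why tighter bounds, such as the one proposed in the letter, are useful.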
Applications of fuzzy ranking methods to risk-management decisions
NASA Astrophysics Data System (ADS)
Mitchell, Harold A.; Carter, James C., III
1993-12-01
The Department of Energy is making significant improvements to its nuclear facilities as a result of more stringent regulation, internal audits, and recommendations from external review groups. A large backlog of upgrades has resulted. Currently, a prioritization method is being utilized which relies on a matrix of potential consequence and probability of occurrence. The attributes of the potential consequences considered include likelihood, exposure, public health and safety, environmental impact, site personnel safety, public relations, legal liability, and business loss. This paper describes an improved method which utilizes fuzzy multiple attribute decision methods to rank proposed improvement projects.
Combining the Best of Two Standard Setting Methods: The Ordered Item Booklet Angoff
ERIC Educational Resources Information Center
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S.
2014-01-01
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
NASA Astrophysics Data System (ADS)
Staden, Raluca-Ioana Stefan-Van; Gugoaşă, Livia Alexandra; Calenic, Bogdan; Legler, Juliette
2014-07-01
Stochastic microsensors based on diamond paste and three types of electroactive materials (maltodextrin (MD), α-cyclodextrin (α-CD) and 5,10,15,20-tetraphenyl-21H,23H porphyrin (P)) were developed for the assay of estradiol (E2), testosterone (T2) and dihydrotestosterone (DHT) in children's saliva. The main advantage of utilizing such tools is the possibility to identify and quantify all three hormones within minutes in small volumes of children's saliva. The limits of quantification obtained for DHT, T2, and E2 (1 fmol/L for DHT, 1 pmol/L for T2, and 66 fmol/L for E2) using the proposed tools allow these new methods to be used with high reliability for the screening of saliva samples from children. The new method proposed for the assay of the three hormones overcomes the limitations (regarding limits of determination) of the ELISA method, which is the standard method used in clinical laboratories for the assay of DHT, T2, and E2 in saliva samples. The main benefit of applying it to children's saliva is the earlier identification of problems related to early puberty and obesity.
Communicative Competence of the Fourth Year Students: Basis for Proposed English Language Program
ERIC Educational Resources Information Center
Tuan, Vu Van
2017-01-01
This study on the level of communicative competence, covering linguistic/grammatical and discourse competence, has aimed at constructing a proposed English language program for 5 key universities in Vietnam. The descriptive method was employed with comparative techniques and correlational analysis. The researcher treated the surveyed data…
7 CFR 1940.317 - Methods for ensuring proper implementation of categorical exclusions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM... or archeological site; (6) Critical Habitat or Endangered/Threatened Species (listed or proposed)—the proposed action: (i) Contain a critical habitat within the project site, (ii) Is adjacent to a critical...
An improved method for pancreas segmentation using SLIC and interactive region merging
NASA Astrophysics Data System (ADS)
Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang
2017-03-01
Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, the Mahalanobis distance is first utilized in the SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by a re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissue to verify its feasibility and effectiveness. The experimental results show that the proposed method achieves an average segmentation accuracy of 92%. This study will boost the application of pancreas segmentation in computer-aided diagnosis systems.
NASA Astrophysics Data System (ADS)
Osman, Essam Eldin A.
2018-01-01
This work represents a comparative study of different approaches of manipulating ratio spectra, applied on a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: ratio difference spectrophotometric method (RDSM), amplitude center method (ACM), first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
Methods to Develop Inhalation Cancer Risk Estimates for ...
This document summarizes the approaches and rationale for the technical and scientific considerations used to derive inhalation cancer risks for emissions of chromium and nickel compounds from electric utility steam generating units. The purpose of this document is to discuss the methods used to develop inhalation cancer risk estimates associated with emissions of chromium and nickel compounds from coal- and oil-fired electric utility steam generating units (EGUs) in support of EPA's recently proposed Air Toxics Rule.
NASA Astrophysics Data System (ADS)
Marshalkin, V. E.; Povyshev, V. M.
2015-12-01
A method for joint utilization of non-weapons-grade plutonium and highly enriched uranium in the thorium-uranium-plutonium oxide fuel of a water-moderated reactor with a varying water composition (D2O, H2O) is proposed. The method is characterized by efficient breeding of the 233U isotope and safe reactor operation, and is comparatively simple to implement.
NASA Astrophysics Data System (ADS)
Khataee, Alireza; Lotfi, Roya; Hasanzadeh, Aliyeh; Iranifam, Mortaza; Joo, Sang Woo
2016-02-01
A simple and sensitive flow injection chemiluminescence (CL) method was developed for determination of nalidixic acid by application of CdS quantum dots (QDs) in a KMnO4-morin CL system in acidic medium. Optical and structural features of the L-cysteine capped CdS quantum dots, which were synthesized via a hydrothermal approach, were investigated using X-ray diffraction (XRD), scanning electron microscopy (SEM), photoluminescence (PL), and ultraviolet-visible (UV-Vis) spectroscopy. Moreover, the potential mechanism of the proposed CL method was described using the results of the kinetic curves of the CL systems and the CL, PL and UV-Vis spectra. The CL intensity of the KMnO4-morin-CdS QDs system was considerably increased in the presence of nalidixic acid. Under the optimum conditions, the enhanced CL intensity was linearly proportional to the concentration of nalidixic acid in the range of 0.0013 to 21.0 mg L-1, with a detection limit (3σ) of 0.003 mg L-1. The proposed CL method was also utilized for determination of nalidixic acid in environmental water samples and a commercial pharmaceutical formulation to confirm its applicability. Furthermore, the corona discharge ionization ion mobility spectrometry (CD-IMS) method was utilized for determination of nalidixic acid, and the results of real sample analysis by the two proposed methods were compared. Comparison of the analytical features of these methods showed that the proposed CL method is preferable to the CD-IMS method for determination of nalidixic acid due to its high sensitivity and precision.
Android malware detection based on evolutionary super-network
NASA Astrophysics Data System (ADS)
Yan, Haisheng; Peng, Lingling
2018-04-01
In this paper, an Android malware detection method based on an evolutionary super-network is proposed in order to improve the precision of Android malware detection. The chi-square statistic is used to select features based on an analysis of Android permissions, and Boolean weighting is utilized to calculate feature weights. The processed feature vectors form the training and test sets; a hyperedge replacement strategy is used to train the super-network classification model, which then classifies the test feature vectors, and the result is compared with traditional classification algorithms. The results show that the proposed detection method is close to or better than traditional classification algorithms, making it an effective means of Android malware detection.
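The chi-square feature selection step can be sketched from its textbook 2x2 contingency formulation; the permission flags below are hypothetical and the code is not the authors' implementation:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square score of each boolean feature against a binary label.

    X: (samples, features) 0/1 matrix (e.g. requested-permission flags),
    y: (samples,) 0/1 labels (benign / malware).  A higher score means
    the feature's presence is more strongly associated with the label.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        # 2x2 contingency table: feature absent/present x label 0/1.
        obs = np.array([[np.sum((X[:, j] == a) & (y == b))
                         for b in (0, 1)] for a in (0, 1)], float)
        expected = np.outer(obs.sum(1), obs.sum(0)) / n
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(expected > 0,
                             (obs - expected) ** 2 / expected, 0.0)
        scores[j] = terms.sum()
    return scores

# A perfectly predictive permission flag scores higher than a random one.
y = np.array([0, 0, 0, 1, 1, 1])
X = np.array([[0, 1], [0, 0], [0, 1], [1, 0], [1, 1], [1, 0]])
scores = chi2_scores(X, y)
print(scores)  # feature 0 matches y exactly, so its score is maximal
```

The top-scoring features would then be Boolean-weighted and passed to the super-network classifier described in the abstract.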
Statistical methods to detect novel genetic variants using publicly available GWAS summary data.
Guo, Bin; Wu, Baolin
2018-03-01
We propose statistical methods to detect novel genetic variants using only genome-wide association studies (GWAS) summary data, without access to raw genotype and phenotype data. With more and more summary data being posted for public access in the post-GWAS era, the proposed methods are practically useful for identifying additional interesting genetic variants and shedding light on the underlying disease mechanism. We illustrate the utility of our proposed methods with application to GWAS meta-analysis results of fasting glucose from the international MAGIC consortium. We found several novel genome-wide significant loci that are worth further study. Copyright © 2018 Elsevier Ltd. All rights reserved.
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without knowledge of material properties or boundary conditions.
The Utility of Robust Means in Statistics
ERIC Educational Resources Information Center
Goodwyn, Fara
2012-01-01
Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
Information hiding based on double random-phase encoding and public-key cryptography.
Sheng, Yuan; Xin, Zhou; Alam, Mohammed S; Xi, Lu; Xiao-Feng, Li
2009-03-02
A novel information hiding method based on double random-phase encoding (DRPE) and the Rivest-Shamir-Adleman (RSA) public-key cryptosystem is proposed. In the proposed technique, the inherent diffusion property of DRPE is cleverly utilized to make up for the diffusion insufficiency of RSA public-key cryptography, while the RSA cryptosystem is utilized for simultaneous transmission of the cipher text and the two phase-masks, which is not possible under the DRPE technique alone. This technique combines the complementary advantages of the DRPE and RSA encryption techniques and brings security and convenience for efficient information transmission. Extensive numerical simulation results are presented to verify the performance of the proposed technique.
Reference point detection for camera-based fingerprint image based on wavelet transformation.
Khalil, Mohammed S
2015-04-30
Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two datasets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%. In the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications. The proposed LRICR algorithm outperforms both the multi-resolution scout-reconstruction method and the TV minimization based reconstruction for interior tomography in microscopic CT.
A study on quantifying COPD severity by combining pulmonary function tests and CT image analysis
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2011-03-01
This paper describes a novel method that can evaluate chronic obstructive pulmonary disease (COPD) severity by combining measurements of pulmonary function tests with measurements obtained from CT image analysis. There is no cure for COPD. However, with regular medical care and consistent patient compliance with treatments and lifestyle changes, the symptoms of COPD can be minimized and progression of the disease can be slowed. Therefore, many diagnosis methods based on CT image analysis have been proposed for quantifying COPD. Most diagnosis methods for COPD extract the lesions as low-attenuation areas (LAA) by thresholding and evaluate COPD severity by calculating the proportion of LAA in the lung (LAA%). However, COPD is usually the result of a combination of two conditions, emphysema and chronic obstructive bronchitis. Therefore, previous methods based only on LAA% do not work well. The proposed method utilizes both the measurements of pulmonary function tests and the results of chest CT image analysis to evaluate COPD severity. In this paper, we utilize a multi-class AdaBoost to combine these sources of information and classify COPD severity into five stages automatically. The experimental results revealed that the accuracy rate of the proposed method was 88.9% (resubstitution scheme) and 64.4% (leave-one-out scheme).
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
Lee, Woo-Chun; Lee, Sang-Woo; Yun, Seong-Taek; Lee, Pyeong-Koo; Hwang, Yu Sik; Kim, Soon-Oh
2016-01-15
Numerous technologies have been developed and applied to remediate AMD, but each has specific drawbacks. To overcome the limitations of existing methods and improve their effectiveness, we propose a novel method utilizing a permeable reactive kiddle (PRK). This manuscript explores the performance of the PRK method. In line with the concept of green technology, the PRK method recycles industrial waste, such as steel slag and waste cast iron. Our results demonstrate that the PRK method can be applied to remediate AMD under optimal operational conditions. Notably, this method allows for simple installation at low cost compared with established technologies. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nikonorov, Aleksandr; Terleev, Vitaly; Badenko, Vladimir; Mirschel, Wilfried; Abakumov, Evgeny; Ginevsky, Roman; Lazarev, Viktor; Togo, Issa; Volkova, Yulia; Melnichuk, Aleksandr; Dunaieva, Ielizaveta; Akimov, Luka
2017-10-01
The problem of flood protection measures is considered in the paper. The regulation of river flow by a system of Self-Regulated Flood Dams (SRFD) is analyzed, and a method for SRFD modeling in a GIS environment is proposed. The ecological aspect of SRFD management is considered based on the hydrophysical properties of the soil. An improved Mualem-Van Genuchten method is proposed for evaluating the influence of possible SRFD locations on the soil of the flooded territory (the temporary reservoirs). The importance and utility of the proposed complex method are stated.
A New TCP Congestion Control Supporting RTT-Fairness
NASA Astrophysics Data System (ADS)
Ogura, Kazumine; Nemoto, Yohei; Su, Zhou; Katto, Jiro
This paper focuses on RTT-fairness of multiple TCP flows over the Internet, and proposes a new TCP congestion control named “HRF (Hybrid RTT-Fair)-TCP”. Today, it is a serious problem that flows with smaller RTT utilize more bandwidth than others when multiple flows having different RTT values compete in the same network. This means that a user with a longer RTT may not be able to obtain sufficient bandwidth under current methods. This RTT-fairness issue has been discussed in many TCP papers. An example is the CR (Constant Rate) algorithm, which achieves RTT-fairness by multiplying the window increment of TCP-Reno by the square of the RTT value. However, the method halves its window size, like TCP-Reno, when a packet loss is detected, which degrades its efficiency in certain network cases. On the other hand, recently proposed TCP versions essentially require throughput efficiency and TCP-friendliness with TCP-Reno. Therefore, we try to keep these advantages in our TCP design in addition to RTT-fairness. In this paper, we build intuitive analytical models in which we separate resource utilization into two cases: utilization of the bottleneck link capacity and utilization of the buffer space at the bottleneck link router. These models take into account three characteristic algorithms (Reno, Constant Rate, Constant Increase) in the window increment phase, where a sender successfully receives an acknowledgement. Their validity is proved by both simulations and implementations. From these analyses, we propose HRF-TCP, which switches between two modes according to observed RTT values and achieves RTT-fairness. Experiments are carried out to validate the proposed method. Finally, HRF-TCP outperforms conventional methods in RTT-fairness, efficiency and friendliness with TCP-Reno.
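The RTT-unfairness of Reno-style additive increase, and how Constant Rate scaling removes it, can be illustrated with a crude fluid-model simulation. This is a sketch under synchronized-loss assumptions, not the paper's analytical model; capacities, RTTs, and step sizes are arbitrary choices:

```python
def simulate(rtts, capacity=100.0, increase="reno", dt=0.002, t_end=200.0):
    """Fluid-model sketch of AIMD fairness for flows with different RTTs.

    increase="reno": window grows 1 segment per RTT (rate gain ~ 1/RTT^2),
    increase="cr":   increment scaled by RTT**2 so every flow gains
                     sending rate at the same pace regardless of RTT.
    All flows halve their windows on a shared loss event (synchronized
    drops at the common bottleneck).  Returns average throughputs.
    """
    w = [1.0 for _ in rtts]
    totals = [0.0 for _ in rtts]
    for _ in range(int(t_end / dt)):
        rates = [wi / r for wi, r in zip(w, rtts)]
        if sum(rates) > capacity:            # shared bottleneck drop
            w = [wi / 2 for wi in w]
        for i, r in enumerate(rtts):
            gain = 1.0 / r                   # Reno: +1 segment per RTT
            if increase == "cr":
                gain *= r * r                # Constant Rate scaling
            w[i] += gain * dt
            totals[i] += (w[i] / r) * dt
    return [t / t_end for t in totals]

reno = simulate([0.05, 0.2], increase="reno")
cr = simulate([0.05, 0.2], increase="cr")
print("Reno throughput ratio:", reno[0] / reno[1])  # far from 1
print("CR throughput ratio:  ", cr[0] / cr[1])      # close to 1
```

Under Reno, the short-RTT flow's rate grows roughly (RTT_long/RTT_short)^2 times faster, so it captures most of the capacity; with the CR scaling both flows gain rate equally and converge toward an even share.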
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
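As a rough illustration of the just-in-time idea, the sketch below substitutes locally weighted ordinary least squares for LW-PLS (a deliberate simplification of the paper's method; the Gaussian bandwidth and the sample data are assumptions):

```python
import numpy as np

def lw_interpolate(t_known, rri_known, t_query, bandwidth=5.0):
    """Interpolate missing RRI samples with locally weighted regression.

    A simplified stand-in for LW-PLS: for each query time a weighted
    linear model is fit on demand ("just-in-time"), with Gaussian
    weights favoring temporally close RRI samples.
    """
    t_known = np.asarray(t_known, float)
    rri_known = np.asarray(rri_known, float)
    out = []
    for tq in np.atleast_1d(t_query):
        w = np.exp(-0.5 * ((t_known - tq) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(t_known), t_known])
        # Weighted least squares: solve (X^T W X) beta = X^T W y.
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ rri_known)
        out.append(beta[0] + beta[1] * tq)
    return np.array(out)

# RRI follows a slow trend; interpolate a dropped beat at t = 5 s.
t = np.array([0, 1, 2, 3, 4, 6, 7, 8, 9, 10], float)
rri = 800.0 + 10.0 * t           # ms, linear trend for illustration
print(lw_interpolate(t, rri, [5.0]))
```

LW-PLS replaces the ordinary least squares step with partial least squares on multiple lagged inputs, which is what makes it robust when the local regressors are collinear.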
Fabric defect detection based on faster R-CNN
NASA Astrophysics Data System (ADS)
Liu, Zhoufeng; Liu, Xianghui; Li, Chunlei; Li, Bicao; Wang, Baorui
2018-04-01
In order to effectively detect defects in fabric images with complex texture, this paper proposes a novel detection algorithm based on an end-to-end convolutional neural network. First, the proposal regions are generated by an RPN (Region Proposal Network). Then, the Fast Region-based Convolutional Network method (Fast R-CNN) is adopted to determine whether the proposal regions extracted by the RPN are defects or not. Finally, Soft-NMS (non-maximum suppression) and data augmentation strategies are utilized to improve the detection precision. Experimental results demonstrate that the proposed method can locate the fabric defect region with higher accuracy compared with the state of the art, and has better adaptability to all kinds of fabric images.
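The Soft-NMS step can be sketched from its published formulation (the Gaussian decay variant); the boxes, scores, and parameters below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Soft-NMS: instead of discarding boxes that overlap a stronger
    detection (classic NMS), decay their scores by a Gaussian of the IoU.

    Returns indices of surviving boxes in order of selection.
    """
    boxes = np.asarray(boxes, float)
    scores = np.asarray(scores, float).copy()
    idxs = list(range(len(scores)))
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            overlap = iou(boxes[best], boxes[i])
            scores[i] *= np.exp(-(overlap ** 2) / sigma)  # Gaussian decay
    return keep

# Two heavily overlapping detections plus one distant detection.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
kept = soft_nms(boxes, [0.9, 0.8, 0.7])
print(kept)  # all three survive; the overlapped box is merely down-weighted
```

Unlike hard NMS, nearby true defects are not suppressed outright, which is why Soft-NMS tends to raise recall on images with clustered defects.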
Reverse-time migration for subsurface imaging using single- and multi-frequency components
NASA Astrophysics Data System (ADS)
Ha, J.; Kim, Y.; Kim, S.; Chung, W.; Shin, S.; Lee, D.
2017-12-01
Reverse-time migration is a seismic data processing method for obtaining accurate subsurface structure images from seismic data. This method has been applied to obtain more precise information on complex geological structures, including steep dips, by considering wave propagation characteristics based on two-way traveltime. Recently, various studies have reported the characteristics of datasets acquired from different types of media. In particular, because real subsurface media comprise various types of structures, seismic data exhibit various responses. Among them, frequency characteristics can be used as an important indicator for analyzing wave propagation in subsurface structures. All frequency components are utilized in conventional reverse-time migration, but analyzing each component is worthwhile because each contains inherent seismic response characteristics. In this study, we propose a reverse-time migration method that utilizes single- and multi-frequency components for subsurface imaging analysis. We performed a spectral decomposition to utilize the characteristics of non-stationary seismic data. We propose two types of imaging conditions, in which decomposed signals are applied in complex and envelope traces. The SEG/EAGE Overthrust model was used to demonstrate the proposed method, and the 1st derivative Gaussian function with a 10 Hz cutoff was used as the source signature. The results were more accurate and stable when relatively lower frequency components in the effective frequency range were used. By combining the gradients obtained from various frequency components, we confirmed that the results are clearer than those of the conventional method using all frequency components. Further study is required to effectively combine the multi-frequency components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hu-Chen; Department of Industrial Engineering and Management, Tokyo Institute of Technology, Tokyo 152-8552; Wu, Jing
Highlights: • Propose a VIKOR-based fuzzy MCDM technique for evaluating HCW disposal methods. • Linguistic variables are used to assess the ratings and weights for the criteria. • The OWA operator is utilized to aggregate individual opinions of decision makers. • A case study is given to illustrate the procedure of the proposed framework. - Abstract: Nowadays, selection of the appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights for the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities in China. The HCW treatment alternatives considered in this study include “incineration”, “steam sterilization”, “microwave” and “landfill”. The results obtained using the proposed approach are analyzed in a comparative way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun
Purpose: Efficient and accurate 3D liver segmentation from contrast-enhanced computed tomography (CT) images plays an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize a proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method requires only an initialization inside the liver surface; no additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., the Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing sets, and a local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applying the method to the MICCAI training set and the local dataset, it yielded a mean Dice similarity coefficient (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively.
These results demonstrated the accuracy of the method when applied to different CT datasets. In addition, operator variability experiments showed its good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated for liver segmentation. This approach does not require prior model construction and so eliminates the burdens associated with model construction and matching. The proposed method provides results comparable with state-of-the-art methods. Validation results suggest that it may be suitable for clinical use.
DOT National Transportation Integrated Search
2014-01-01
A comprehensive field detection method is proposed that is aimed at developing advanced capability for reliable monitoring, inspection and life estimation of bridge infrastructure. The goal is to utilize Motion-Sensing Radio Transponders (RFIDS) on...
Single-shot real-time three dimensional measurement based on hue-height mapping
NASA Astrophysics Data System (ADS)
Wan, Yingying; Cao, Yiping; Chen, Cheng; Fu, Guangkai; Wang, Yapin; Li, Chengmeng
2018-06-01
A single-shot three-dimensional (3D) measurement method based on hue-height mapping is proposed. The color fringe pattern is encoded by three sinusoidal fringes with the same frequency but different phase shifts in the red (R), green (G) and blue (B) color channels, respectively. It is found that the hue of the captured color fringe pattern on the reference plane remains monotonic within one period even in the presence of color crosstalk. Thus, unlike the traditional color phase-shifting technique, the proposed method utilizes the hue information to decode the color fringe pattern and map it to the fringe displacement in pixels. Because the monotonicity of the hue is limited to one period, displacement unwrapping is proposed to obtain the continuous displacement, which is finally mapped to the height distribution. This method directly utilizes the hue under the effect of color crosstalk for mapping the height, so no color calibration is involved. Also, as it requires only a single-shot deformed color fringe pattern, this method can be applied to real-time or dynamic 3D measurements.
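To illustrate why hue can serve as a monotonic phase code, the toy sketch below (an assumption-laden illustration, not the paper's implementation) encodes three 120°-shifted sinusoids into R, G and B and recovers the hue with Python's standard colorsys module; the hue rises monotonically across one fringe period.

```python
import colorsys
import math

def fringe_rgb(phase):
    # three sinusoids with 120-degree mutual shifts packed into R, G, B
    r = 0.5 + 0.5 * math.cos(phase)
    g = 0.5 + 0.5 * math.cos(phase - 2 * math.pi / 3)
    b = 0.5 + 0.5 * math.cos(phase + 2 * math.pi / 3)
    return r, g, b

# sample one fringe period and recover the hue at each phase
phases = [2 * math.pi * k / 64 for k in range(64)]
hues = [colorsys.rgb_to_hsv(*fringe_rgb(p))[0] for p in phases]
```

Because the hue is non-decreasing over the period, it can be inverted to recover the fringe displacement, which is the property the proposed mapping relies on.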
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.
Experimental Methods to Evaluate Science Utility Relative to the Decadal Survey
NASA Technical Reports Server (NTRS)
Widergren, Cynthia
2012-01-01
The driving factor for a competed mission is the science it plans to perform once it has reached its target body. These science goals are derived from the science recommended by the most current Decadal Survey. This work focuses on the science goals of previous Venus mission proposals with respect to the 2013 Decadal Survey. By looking at how the goals compare to the survey and how much confidence NASA has in the mission's ability to accomplish these goals, a method was created to assess the science return utility of each mission. This method can be used as a tool for future Venus mission formulation and serves as a starting point for the future development of science utility assessment tools.
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for the identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. The orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessels or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method correctly classified 91.44% of vessel pixels as either artery or vein. The accuracy of correctly classified major vessel segments was 96.42%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshalkin, V. E., E-mail: marshalkin@vniief.ru; Povyshev, V. M.
A method for the joint utilization of non-weapons-grade plutonium and highly enriched uranium in the thorium-uranium-plutonium oxide fuel of a water-moderated reactor with a varying water composition (D2O, H2O) is proposed. The method is characterized by efficient breeding of the 233U isotope and safe reactor operation, and is comparatively simple to implement.
Winkelman, Warren J.; Leonard, Kevin J.
2004-01-01
There are constraints embedded in medical record structure that limit use by patients in self-directed disease management. Through systematic review of the literature from a critical perspective, four characteristics that either enhance or mitigate the influence of medical record structure on patient utilization of an electronic patient record (EPR) system have been identified: environmental pressures, physician centeredness, collaborative organizational culture, and patient centeredness. An evaluation framework is proposed for use when considering adaptation of existing EPR systems for online patient access. Exemplars of patient-accessible EPR systems from the literature are evaluated utilizing the framework. From this study, it appears that traditional information system research and development methods may not wholly capture many pertinent social issues that arise when expanding access of EPR systems to patients. Critically rooted methods such as action research can directly inform development strategies so that these systems may positively influence health outcomes. PMID:14633932
Code of Federal Regulations, 2012 CFR
2012-10-01
..., cost of project and methods, personnel, and facilities to be utilized in carrying out the requirements... Act, established to conduct technical and scientific review of contract proposals and to make...
Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2014-08-21
The authors have carried out basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field B, electron density n_e, electron temperature T_e, and total plasma current I_p) utilizing a polarimeter in future fusion reactors. In these proceedings, a brief review of the polarimeter, the principle of the multiparameter measurement, and the progress of the research on the multiparameter measurement are presented. The measurement method that the authors have proposed is suitable for the reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light utilized by the polarimeter is less sensitive to degradation of optical components. Taking the measuring error into account, a performance assessment of the proposed method was carried out. Assuming that the errors Δθ and Δε were 0.1° and 0.6°, respectively, the errors of the reconstructed j_φ, n_e and T_e were 12%, 8.4% and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors of j_φ, n_e and T_e were reduced to 4.4%, 4.4%, and 17%, respectively.
The Model for External Reliance of Localities In (MERLIN) Coastal Management Zones is a proposed solution to allow scaling of variables to smaller, nested geographies. Utilizing a Principal Components Analysis and data normalization techniques, smaller scale trends are linked to ...
Agin, Patricia Poh; Edmonds, Susan H
2002-08-01
The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test subjects needed, the variability of the data, and the appropriate exposure increments needed for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.
Active Semi-Supervised Community Detection Based on Must-Link and Cannot-Link Constraints
Cheng, Jianjun; Leng, Mingwei; Li, Longjie; Zhou, Hanhai; Chen, Xiaoyun
2014-01-01
Community structure detection is of great importance because it can help in discovering the relationship between the function and the topology of a network. Many community detection algorithms have been proposed, but how to incorporate prior knowledge into the detection process remains a challenging problem. In this paper, we propose a semi-supervised community detection algorithm which makes full use of must-link and cannot-link constraints to guide the process of community detection and thereby extracts high-quality community structures from networks. To acquire high-quality must-link and cannot-link constraints, we also propose a semi-supervised component generation algorithm based on active learning, which actively selects nodes with maximum utility for the proposed semi-supervised community detection algorithm step by step, and then generates the must-link and cannot-link constraints by accessing a noiseless oracle. Extensive experiments were carried out, and the experimental results show that the introduction of active learning into the problem of community detection is successful. Our proposed method can extract high-quality community structures from networks and significantly outperforms other comparison methods. PMID:25329660
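As a skeleton of how must-link and cannot-link constraints can gate a merging step (a hedged illustration only; the paper's algorithm additionally uses active learning and a quality objective), the sketch below merges must-linked nodes first and then greedily merges along the strongest edges, skipping any merge that would unite a cannot-linked pair.

```python
def find(parent, x):
    # union-find root lookup with path compression
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def constrained_communities(n, edges, must, cannot):
    # edges: list of (similarity, u, v); must/cannot: lists of node pairs
    parent = list(range(n))
    for u, v in must:                       # must-link pairs merged first
        parent[find(parent, u)] = find(parent, v)
    for _, u, v in sorted(edges, reverse=True):
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            continue
        # skip merges that would put a cannot-link pair in one community
        if any({find(parent, a), find(parent, b)} == {ru, rv}
               for a, b in cannot):
            continue
        parent[ru] = rv
    return [find(parent, i) for i in range(n)]
```

The cannot-link check is what prevents the greedy pass from collapsing the whole graph into a single community.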
Link-Based Similarity Measures Using Reachability Vectors
Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin
2014-01-01
We present a novel approach for accurately computing link-based similarities among objects by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach, each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight of the corresponding object. For this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
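A minimal sketch of the reachability-vector idea, assuming a small undirected graph and the standard power-iteration form of Random Walk with Restart (the restart probability and iteration count are illustrative choices, not values from the paper):

```python
import numpy as np

def rwr_vector(adj, seed, restart=0.15, iters=100):
    # reachability vector of `seed`: stationary distribution of a walk
    # that restarts at the seed node with probability `restart`
    P = adj / adj.sum(axis=0, keepdims=True)  # column-stochastic transitions
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0
    v = e.copy()
    for _ in range(iters):
        v = (1 - restart) * P @ v + restart * e
    return v

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Two nodes that occupy similar positions in the link structure get nearby reachability vectors, so their cosine similarity is high even if they are not directly linked.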
Geometrical force constraint method for vessel and x-ray angiogram simulation.
Song, Shuang; Yang, Jian; Fan, Jingfan; Cong, Weijian; Ai, Danni; Zhao, Yitian; Wang, Yongtian
2016-01-01
This study proposes a novel geometrical force constraint method for 3-D vasculature modeling and angiographic image simulation. In this method, a space-filling force, a gravitational force, and a topology-preserving force are proposed and combined to optimize the topology of the vascular structure. A surface-covering force and a surface-adhesion force are constructed to drive the growth of the vasculature on any surface. Through the combined effects of the topological and surface-adhering forces, a realistic vasculature can be effectively simulated on any surface. The image projection of the generated 3-D vascular structures is simulated according to the perspective projection and energy attenuation principles of X-rays. Finally, the simulated projection vasculature is fused with a predefined angiographic mask image to generate a realistic angiogram. The proposed method is evaluated on a CT image and three commonly used surfaces. The results fully demonstrate the effectiveness and robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have several drawbacks. First, information loss is caused by two factors: the implicit assumption that the probability distribution over an interval number is uniform, and ignoring the value of decision makers' (DMs') common opinion in evaluating criteria information. Secondly, differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify interval numbers with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters; therefore, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose, we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
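The DEA core can be sketched in a few lines of NumPy. The version below is a generic differential evolution loop applied to a stand-in one-dimensional objective; in the paper's setting the objective would instead score the discrimination ability of KFKT for a candidate kernel parameter. Population size, F, and CR are illustrative assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=60, seed=0):
    # classic DE/rand/1/bin minimizer over box constraints
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]
```

For kernel-parameter search the bounds would cover, e.g., an RBF width, and f would return the negative of a KFKT separability score.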
Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery
Sivaraks, Haemwaan
2015-01-01
Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance demands on physicians and a misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design, and every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets were interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy of detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts compared with competitive anomaly detection methods. PMID:25688284
Traffic sign classification with dataset augmentation and convolutional neural network
NASA Astrophysics Data System (ADS)
Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun
2018-04-01
This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first convert the color image to grayscale and then normalize it to the range (-1,1) as a preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm to create new images for training the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German Traffic Sign Recognition Benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
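The preprocessing described above can be sketched as follows; the luminosity weights and the cyclic-shift augmentation are illustrative assumptions (np.roll wraps around, whereas a real augmentation pipeline would typically pad or crop):

```python
import numpy as np

def preprocess(rgb):
    # luminosity grayscale conversion, then scale pixel values to (-1, 1)
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return (gray / 255.0 - 0.5) * 2.0

def augment_shift(img, dx, dy):
    # cyclic shift by (dx, dy) pixels as a cheap stand-in for
    # translation augmentation (note: wraps around the borders)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

Each training image would pass through preprocess once, while augment_shift (and similar jitter) generates extra variants to make the classifier robust to small misalignments.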
Credit Risk Evaluation of Power Market Players with Random Forest
NASA Astrophysics Data System (ADS)
Umezawa, Yasushi; Mori, Hiroyuki
A new method is proposed for credit risk evaluation in a power market. Credit risk evaluation measures the bankruptcy risk of a company. Power system liberalization has resulted in a new environment that puts emphasis on profit maximization and risk minimization. There is a high probability that electricity transactions create risk between companies, so power market players are concerned with risk minimization. As a management strategy, a risk index is required to evaluate the worth of a business partner. This paper proposes a new method for evaluating credit risk with Random Forest (RF), which performs ensemble learning over decision trees. RF is an efficient data mining technique for clustering data and extracting relationships between input and output data. In addition, a method of generating pseudo-measurements is proposed to improve the performance of RF. The proposed method is successfully applied to real financial data of energy utilities in the power market, and a comparison is made between the proposed and conventional methods.
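To illustrate the bagging-and-voting structure underlying RF (a toy sketch with depth-one trees on made-up financial ratios, far simpler than the authors' method), consider:

```python
import numpy as np

def fit_stump(X, y, feats):
    # exhaustive one-feature threshold search over a random feature subset
    best = None
    for f in feats:
        for t in np.unique(X[:, f]):
            mask = X[:, f] <= t
            if mask.all() or not mask.any():
                continue
            left = round(y[mask].mean())    # majority label on each side
            right = round(y[~mask].mean())
            acc = (np.where(mask, left, right) == y).mean()
            if best is None or acc > best[0]:
                best = (acc, f, t, left, right)
    if best is None:                        # degenerate bootstrap sample
        return feats[0], X[0, feats[0]], round(y.mean()), round(y.mean())
    return best[1:]

def predict_stump(stump, X):
    f, t, left, right = stump
    return np.where(X[:, f] <= t, left, right)

def random_forest_fit(X, y, n_trees=25, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(n, size=n)                        # bootstrap rows
        feats = rng.choice(d, size=max(1, int(np.sqrt(d))),  # random features
                           replace=False)
        forest.append(fit_stump(X[idx], y[idx], feats))
    return forest

def random_forest_predict(forest, X):
    votes = np.stack([predict_stump(s, X) for s in forest])
    return (votes.mean(axis=0) > 0.5).astype(int)            # majority vote
```

Real RF grows full decision trees rather than stumps, but the bootstrap sampling, random feature subsets, and majority voting shown here are the same ensemble mechanics.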
Finger vein recognition using local line binary pattern.
Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin
2011-01-01
In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits, such as fingerprints and palm prints, because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP).
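The line-shaped neighborhood of LLBP can be illustrated with the horizontal-line sketch below; the line length, bit ordering, and the >= comparison are assumptions for illustration and may differ from the paper's exact descriptor (which also uses a vertical line and combines both magnitudes).

```python
import numpy as np

def llbp_horizontal(img, length=7):
    # binary code from the pixels on a horizontal line through each pixel:
    # each neighbor on the line is thresholded against the center value
    h = length // 2
    H, W = img.shape
    codes = np.zeros((H, W - 2 * h), dtype=int)
    for j in range(h, W - h):
        center = img[:, j]
        bits = []
        for k in range(1, h + 1):
            bits.append(img[:, j - k] >= center)  # left side of the line
            bits.append(img[:, j + k] >= center)  # right side of the line
        code = np.zeros(H, dtype=int)
        for i, b in enumerate(bits):
            code += b.astype(int) << i            # pack bits into one code
        codes[:, j - h] = code
    return codes
```

On a flat image every neighbor ties with the center, so all six bits are set and every code equals 63; on a vein image the codes capture the intensity profile along the line.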
Boundary fitting based segmentation of fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2015-03-01
Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.
2011-03-01
Utility theory, the ELECTRE method, and the PROMETHEE method are reviewed. The PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) method was proposed by Brans and Vincke (1985) and is more complicated than the other methods because of its structure. Basically, this method has two different types: PROMETHEE I has been designed for
The Valuation of Scientific and Technical Experiments
NASA Technical Reports Server (NTRS)
Williams, F. E.
1972-01-01
Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.
Multiple UAV Cooperation for Wildfire Monitoring
NASA Astrophysics Data System (ADS)
Lin, Zhongjie
Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire. This is achieved through estimation of the wildfire spreading situation based on on-line measurements, together with a wise cooperation strategy to ensure efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With the enormous volume of on-line measurements from the on-board sensors of the UAVs, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation, and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network, and a consensus-based method is proposed to address it. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis and then verified in an independent fire simulation environment.
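A minimal sketch of a Kalman filter estimating the rate of spread from front-position measurements, assuming a constant-velocity state model (the thesis's actual filter operates on fire-front contours and is considerably richer; the noise covariances here are illustrative):

```python
import numpy as np

def kalman_ros(measurements, dt=1.0, q=1e-3, r=0.25):
    # state: [front position, rate of spread]; constant-velocity model
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we only observe position
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    for z in measurements[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()  # [position, estimated rate of spread]
```

Even though the rate of spread is never measured directly, the filter recovers it from the sequence of position observations, which is the core idea behind the on-line estimation.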
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive method for adjusting the processing quality of multiple image stream tasks with widely varying execution times. This adjustment method completes the worst-case executions of the tasks within a given energy budget and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks executable with arbitrary processing quality, and a near-maximum value for tasks executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves reward values up to 57% larger than the previous method.
Transient Stability Output Margin Estimation Based on Energy Function Method
NASA Astrophysics Data System (ADS)
Miwa, Natsuki; Tanaka, Kazuyuki
In this paper, a new method of estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnostics. The proposed method can directly compute the stability-limit output for a given contingency based on the transient energy function (TEF) method. Since the CGM can be directly obtained from the limit output using estimated P-θ curves and is easy to understand, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also estimate a negative CGM, which indicates instability under the present load profile; such a negative CGM can be directly utilized as a generator output restriction. The accuracy and fast solution capability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, a useful application of the CGM to severity ranking of transient stability for many contingency cases is discussed.
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flow on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we solved the forward Navier-Stokes problem inside the 3D moving LV, projected the resulting 3D velocity fields onto the imaging plane, and took the inner product of the projected 2D velocity fields with the scanline-directional velocity fields to produce synthetic scanline-directional projected velocities at each position. The proposed method then used these 2D synthetic projected velocity data to reconstruct the LV blood flow. Computing the difference between the synthetic and reconstructed flow fields yields averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
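The scanline projection step described above (taking the inner product of the in-plane velocity with the scanline direction) can be sketched as follows. The probe-at-origin geometry and the toy radial flow are illustrative assumptions for this sketch, not the paper's actual setup.

```python
import numpy as np

# Hedged sketch (not the authors' code): color Doppler measures only the
# component of the in-plane velocity (u, v) along each scanline. For a probe
# at the origin, the unit scanline direction at pixel (x, y) is (x, y)/|r|.

def project_on_scanlines(u, v, x, y):
    """Return the scanline-directional (radial) velocity at each pixel."""
    r = np.sqrt(x**2 + y**2)
    r = np.where(r == 0, 1e-12, r)      # guard against division by zero at the apex
    return (u * x + v * y) / r          # inner product with the unit radial vector

# Toy check: a purely radial flow of speed 0.3 projects onto itself.
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(1, 2, 5))
r = np.sqrt(x**2 + y**2)
u, v = 0.3 * x / r, 0.3 * y / r         # radial flow, |velocity| = 0.3 everywhere
vr = project_on_scanlines(u, v, x, y)
```

For radial flow the projected velocity equals the full speed at every pixel, which makes the toy case easy to verify by hand.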
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction in fluorescence molecular tomography (FMT) can achieve higher image clarity than image-based reconstruction. However, the implicit shape method suffers from a low convergence speed and performs unstably due to its use of gradient-based optimization methods; moreover, it requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, so the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. The results show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method, and the proposed method requires far fewer iterations than the implicit shape method. Overall, the proposed method performs more stably, converges faster than the implicit shape method, and achieves higher image clarity than image-based reconstruction.
Rahman, Md Mostafizur; Fattah, Shaikh Anowarul
2017-01-01
In view of the recent increase in brain-computer interface (BCI) based applications, efficient classification of various mental tasks has become increasingly important. Effective classification requires an efficient feature extraction scheme, for which the proposed method utilizes the interchannel relationship among electroencephalogram (EEG) data. The correlation obtained from different combinations of channels is expected to differ across mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is applied to a test EEG signal obtained from a channel, yielding a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Different statistical features are also obtained from each IMF. Finally, the feature matrix is formed from the interchannel correlation features and the intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. On an EEG dataset containing ten different combinations of five mental tasks, the proposed scheme achieves a very high level of accuracy compared with existing methods.
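As a rough illustration of the feature-matrix construction (interchannel correlations plus per-channel statistics), the sketch below operates on raw channels rather than EMD-derived IMFs; the EMD step is omitted to keep the example self-contained, so this is not the paper's exact pipeline.

```python
import numpy as np

# Illustrative sketch only: the paper correlates IMFs of different channels
# after EMD. Here we apply the same feature construction directly to raw
# channels (an assumption made to avoid an EMD dependency).

def eeg_features(trial):
    """trial: (n_channels, n_samples) array -> 1-D feature vector."""
    n_ch = trial.shape[0]
    corr = np.corrcoef(trial)                     # interchannel correlation matrix
    iu = np.triu_indices(n_ch, k=1)               # unique channel pairs
    inter = corr[iu]                              # interchannel correlation features
    intra = np.concatenate([trial.mean(axis=1),   # intrachannel statistical features
                            trial.std(axis=1),
                            np.abs(trial).max(axis=1)])
    return np.concatenate([inter, intra])

rng = np.random.default_rng(0)
trial = rng.standard_normal((4, 256))             # 4 channels, 256 samples
feat = eeg_features(trial)
# 4 channels -> 6 pairwise correlations + 3*4 statistics = 18 features
```

The resulting vectors would then be stacked into the feature matrix and passed to an SVM, as the abstract describes.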
NASA Astrophysics Data System (ADS)
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method with a blind source separation method is proposed. Because a diesel engine has many complex noise sources, lead covering was applied to the engine during the noise and vibration test to isolate interference noise from the No. 1-5 cylinders; only the No. 6 cylinder parts were left bare. Two microphones simulating human ears were used to measure the radiated noise signals 1 m away from the engine. First, the binaural sound localization method was adopted to separate noise sources located in different places. Then, for noise sources in the same place, the blind source separation method was utilized to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine; the two are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
Zhang, Sen; Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong
2018-02-20
Gas utilization ratio (GUR) is an important indicator used to evaluate the energy consumption of blast furnaces (BFs), and existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR that combines a TS fuzzy neural network (TS-FNN) with particle swarm optimization (PSO). PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. A box-plot method is also applied during data preprocessing to eliminate abnormal values in the raw data; this handles data that do not obey a normal distribution, as arises in complex industrial environments. The prediction results demonstrate that the PSO-optimized TS-FNN achieves higher prediction accuracy than the plain TS-FNN and SVM models, and that the proposed approach can accurately predict the GUR of the blast furnace, providing an effective basis for on-line blast furnace distribution control.
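A minimal sketch of the PSO component described above. The quadratic toy loss stands in for the TS-FNN training error, and all hyperparameters (inertia 0.7, acceleration coefficients 1.5) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso(loss, dim, n_particles=20, iters=200, seed=0):
    """Minimize `loss` over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia / cognitive / social
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val                    # update personal bests
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for the TS-FNN training error: distance to a known parameter vector.
target = np.array([1.0, -2.0, 3.0])
best, best_val = pso(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

In the paper's setting, `loss` would evaluate the TS-FNN prediction error for a candidate parameter vector rather than this synthetic objective.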
An Open Source Agenda for Research Linking Text and Image Content Features.
ERIC Educational Resources Information Center
Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi
2001-01-01
Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…
Cluster-based query expansion using external collections in medical information retrieval.
Oh, Heung-Seon; Jung, Yuchul
2015-12-01
Utilizing external collections to improve retrieval performance is challenging because various test collections are created for different purposes. Improving medical information retrieval has also gained much attention as various types of medical documents have become available to researchers since they began being stored in machine-processable formats. In this paper, we propose an effective method of utilizing external collections based on the pseudo-relevance feedback approach. Our method incorporates the structure of external collections when estimating the individual components of the final feedback model. Extensive experiments on three medical collections (TREC CDS, CLEF eHealth, and OHSUMED) were performed, and the results were compared with a representative expansion approach utilizing external collections to show the superiority of our method.
Almas, Muhammad Shoaib; Vanfretti, Luigi
2017-01-01
Synchrophasor measurements from Phasor Measurement Units (PMUs) are the primary sensors used to deploy Wide-Area Monitoring, Protection and Control (WAMPAC) systems. PMUs stream out synchrophasor measurements through the IEEE C37.118.2 protocol using TCP/IP or UDP/IP. The proposed method establishes direct communication between two PMUs, eliminating the need for an intermediate phasor data concentrator, data mediator, and/or protocol parser, thereby ensuring minimum communication latency apart from communication link delays. This method allows synchrophasor measurements to be utilized internally in a PMU to deploy custom protection and control algorithms. These algorithms are deployed using protection logic equations, which are supported by all PMU vendors. Moreover, this method reduces overall equipment cost, as the algorithms execute internally in a PMU and therefore require no additional controller for their deployment. The proposed method can be utilized for fast prototyping of wide-area measurement based protection and control applications. It is tested by coupling commercial PMUs as Hardware-in-the-Loop (HIL) with Opal-RT's eMEGAsim Real-Time Simulator (RTS). As an illustrative example, an anti-islanding protection application is deployed using the proposed method and its performance is assessed. The essential points of the method are:
• Bypassing intermediate phasor data concentrators or protocol parsers, since the synchrophasors are communicated directly between the PMUs (minimizing communication delays).
• Deploying the wide-area protection and control algorithm using logic equations in the client PMU, eliminating the need for an external hardware controller (cost curtailment).
• Providing an effortless means to exploit PMU measurements in an environment familiar to protection engineers.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main lens and microlens f-numbers is used as an additional constraint for the calibration. The geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method calibrates the focused light field camera successfully. The parameters obtained from the calibration are then utilized to trace the rays of flame radiance, and a least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct flame temperature distributions. The flame temperature obtained by the proposed method was compared with that obtained using a high-precision thermocouple; the difference between the two measurements was no greater than 6.7%. The experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of flame temperature.
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, to address the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation method with a crater-based absolute navigation method, which identifies craters in a georeferenced database for continuous estimation of absolute states. In addition, expressions for the relative state estimation bias are derived, and novel necessary and sufficient observability criteria based on error analysis are provided to improve navigation performance; these criteria hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which provides globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to suit large-scale computation. In this paper, a new L1-norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.
A new method to extract modal parameters using output-only responses
NASA Astrophysics Data System (ADS)
Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo
2005-04-01
This work proposes a new output-only modal analysis method to extract the mode shapes and natural frequencies of a structure. The proposed method is based on a single-degree-of-freedom approach in the time domain. For a set of given mode-isolated signals, the undamped mode shapes are extracted utilizing the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected onto the estimated modal basis. The proposed method is particularly efficient when a high resolution of the mode shape is essential. The accuracy of the method is numerically verified using a set of time histories simulated with a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
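The SVD step can be illustrated on a synthetic mode-isolated signal: for sensor outputs of the form Y = phi * q(t), the output energy correlation matrix Y Y^T is (up to noise) rank one, and its leading singular vector recovers the mode shape up to sign and scale. The four-sensor signal model below is a stand-in assumption, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)
phi = np.array([0.2, 0.5, 1.0, 0.7])               # "true" mode shape at 4 sensors
q = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.5 * t)  # damped modal coordinate
Y = np.outer(phi, q) + 0.01 * rng.standard_normal((4, t.size))  # noisy sensor outputs

R = Y @ Y.T                                        # output energy correlation matrix
U, s, _ = np.linalg.svd(R)
phi_hat = U[:, 0]                                  # leading singular vector
phi_hat = phi_hat / phi_hat[np.argmax(np.abs(phi_hat))]  # normalize to unit peak
```

With a high signal-to-noise ratio the normalized leading singular vector matches the true shape closely, which is what makes the method attractive when high mode-shape resolution (many sensors) is needed.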
Crowd motion segmentation and behavior recognition fusing streak flow and collectiveness
NASA Astrophysics Data System (ADS)
Gao, Mingliang; Jiang, Jun; Shen, Jin; Zou, Guofeng; Fu, Guixia
2018-04-01
Crowd motion segmentation and crowd behavior recognition are two hot issues in computer vision, and a number of methods have been proposed to tackle them. Among these methods, flow dynamics is utilized to model crowd motion, with little consideration of collective properties. Moreover, traditional crowd behavior recognition methods treat local features and dynamic features separately and overlook the interconnection of topological and dynamical heterogeneity in complex crowd processes. A crowd motion segmentation method and a crowd behavior recognition method are proposed based on streak flow and crowd collectiveness: streak flow is adopted to reveal the dynamical property of crowd motion, and collectiveness is incorporated to reveal its structural property. Experimental results show that the proposed methods improve crowd motion segmentation accuracy and crowd behavior recognition rates compared with state-of-the-art methods.
Knowledge Discovery from Posts in Online Health Communities Using Unified Medical Language System.
Chen, Donghua; Zhang, Runtong; Liu, Kecheng; Hou, Lei
2018-06-19
Patient-reported posts in Online Health Communities (OHCs) contain valuable information that can help establish knowledge-based online support for patients. However, utilizing these posts to improve online patient services is difficult in the absence of appropriate medical and healthcare expert knowledge. Thus, we propose a comprehensive knowledge discovery method based on the Unified Medical Language System for the analysis of narrative posts in OHCs. First, we propose a domain-knowledge support framework for OHCs to provide a basis for post analysis. Second, we develop a Knowledge-Involved Topic Modeling (KI-TM) method to extract and expand explicit knowledge within the text. We propose four metrics, namely explicit knowledge rate, latent knowledge rate, knowledge correlation rate, and perplexity, for evaluating the KI-TM method. Our experimental results indicate that the proposed method outperforms existing methods in terms of knowledge support. Our method enhances knowledge support for online patients and can help develop intelligent OHCs in the future.
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments makes road sign detection a challenging problem in computer vision. We propose a method to detect multiview road signs based on several algorithms: the classical cascaded detector, a self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The cascaded detector detects frontal road signs in video sequences and provides the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model with the normalized red channel, which greatly enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is utilized to detect road signs viewed from different directions. Experimental results show that, compared with previous detection methods, the proposed multiview detection method reaches a higher detection rate on signs viewed from different directions.
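The color-model idea of combining a normalized red channel with a Gaussian weight can be sketched as below. The exact form of the paper's SW-Gaussian model is not given in the abstract, so the center placement, sigma choice, and multiplicative combination here are all our assumptions.

```python
import numpy as np

# Hedged sketch of a red-enhancement map: normalized red channel weighted by
# a 2-D Gaussian. This is an invented stand-in for the SW-Gaussian model.

def red_sign_map(img, sigma_frac=0.5):
    """img: (H, W, 3) float RGB in [0, 1] -> red-likelihood map in [0, 1)."""
    eps = 1e-8
    norm_red = img[..., 0] / (img.sum(axis=-1) + eps)  # normalized red channel
    h, w = norm_red.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2                  # image-centred Gaussian
    sy, sx = sigma_frac * h, sigma_frac * w
    g = np.exp(-(((yy - cy) / sy) ** 2 + ((xx - cx) / sx) ** 2) / 2)
    return norm_red * g

img = np.zeros((32, 32, 3))
img[12:20, 12:20, 0] = 1.0        # a pure-red square near the image centre
m = red_sign_map(img)             # map responds strongly inside the square
```

A thresholded version of such a map would give candidate red-sign regions for the downstream shape matching stage.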
Novel X-ray Communication Based XNAV Augmentation Method Using X-ray Detectors
Song, Shibin; Xu, Luping; Zhang, Hua; Bai, Yuanjie
2015-01-01
The further development of X-ray pulsar-based NAVigation (XNAV) is hindered by its lack of accuracy, so accuracy improvement has become a critical issue for XNAV. In this paper, an XNAV augmentation method that utilizes both pulsar observations and X-ray ranging observations for navigation filtering is proposed to deal with this issue. As a newly emerged concept, X-ray communication (XCOM) shows great potential in space exploration. X-ray ranging, derived from XCOM, can achieve high accuracy in range measurement and could thus provide accurate information for XNAV. For the proposed method, the measurement models of the pulsar observation and the range measurement are established, and a Kalman filtering algorithm based on these observations and the orbit dynamics is proposed to estimate the position and velocity of a spacecraft. A performance comparison of the proposed method with the traditional pulsar observation method is conducted by numerical experiments. In addition, the parameters that influence the performance of the proposed method, such as the pulsar observation time and the SNR of the ranging signal, are analyzed and evaluated by numerical experiments.
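The fusion idea (one coarse pulsar-like observation and one precise ranging observation feeding a Kalman filter) can be shown with a single stacked measurement update. The 1-D dynamics, noise levels, and dimensions below are toy assumptions, not the paper's measurement models.

```python
import numpy as np

# Hedged toy: a Kalman measurement update that stacks a coarse "pulsar-like"
# position observation with a precise "XCOM-ranging" observation of the same
# quantity. All numbers are illustrative.

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0, 0.0])                      # prior mean: [position, velocity]
P = np.diag([100.0, 10.0])                    # prior covariance
H = np.array([[1.0, 0.0],                     # pulsar-like position observation
              [1.0, 0.0]])                    # X-ray ranging observation
R = np.diag([25.0, 0.01])                     # ranging is far more precise
z = np.array([4.0, 5.0])                      # pulsar says 4, ranging says 5
x_post, P_post = kalman_update(x, P, z, H, R)
```

The posterior position lands essentially on the precise ranging value, which is the mechanism by which the augmentation improves XNAV accuracy.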
Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...
Robot Position Sensor Fault Tolerance
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault-tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to keep the robot joint stable without position feedback and to improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint-specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor; it is not computationally complex and its position error is bounded. The system-wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the positions of sets of multiple joints. The system-wide method requires fewer accelerometers than the joint-specific method to make all joint position sensors fault tolerant, but is more computationally complex and has weaker convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.
Methods for reverberation suppression utilizing dual frequency band imaging.
Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas
2013-09-01
Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but it does not remove all reverberations. Dual frequency band imaging (DBI), which utilizes a low-frequency pulse to manipulate the propagation of the high-frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two further methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) method and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse across two transmissions with alternating manipulation pressure to extract information about its depth of first scattering; FOCW further uses this information to estimate the content of first-order scattering in the received signal. An initial evaluation is presented in which both methods are applied to simulated and in vivo data. Both methods yield a visible and measurable substantial improvement in image contrast. Comparing the two, DCS produces sharper images and retains more detail, while FOCW achieves the best suppression levels and thus the highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.
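The delay-sensitivity principle behind these methods can be caricatured with synthetic signals: the low-frequency manipulation delays the first-order echo by +tau or -tau on alternating transmissions, while a multiply-scattered reverberation is, to first order, unaffected, so subtracting the two receive signals suppresses the reverberation and keeps a delay-sensitive first-order residue. All signals and parameter values below are our assumptions, not in vivo data, and this is a simplification of the actual DCS processing.

```python
import numpy as np

fs = 50e6                                     # 50 MHz sampling (assumed)
t = np.arange(0, 20e-6, 1 / fs)

def echo(t0):
    """Gaussian-windowed 5 MHz pulse centred at time t0."""
    return np.exp(-((t - t0) * 2e6) ** 2) * np.cos(2 * np.pi * 5e6 * (t - t0))

tau = 40e-9                                   # manipulation-induced delay (assumed)
reverb = 0.3 * echo(12e-6)                    # delay-insensitive reverberation
rx_plus = echo(8e-6 + tau) + reverb           # transmission with +pressure
rx_minus = echo(8e-6 - tau) + reverb          # transmission with -pressure
diff = rx_plus - rx_minus                     # reverberation cancels exactly

first_order = np.abs(diff[(t > 7e-6) & (t < 9e-6)]).max()    # survives
reverb_left = np.abs(diff[(t > 11e-6) & (t < 13e-6)]).max()  # suppressed
```

In the real methods the difference signal is further processed (delay correction in DCS, first-order content estimation in FOCW) rather than used directly as an image.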
NASA Astrophysics Data System (ADS)
Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.
2016-10-01
A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
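The channel-head step above (clustering contours of a first-order valley into unchannelized and channelized groups with k-means, instead of a fixed threshold) can be sketched on one-dimensional contour descriptors. The descriptor values are synthetic stand-ins for the paper's contour-shape features.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50, seed=0):
    """Tiny 1-D k-means; returns cluster labels and centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)     # init from the data
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic descriptors along one valley: broad hillslope contours (~0.1)
# upstream, incised channel contours (~0.9) downstream.
rng = np.random.default_rng(1)
x = np.concatenate([0.1 + 0.03 * rng.standard_normal(30),
                    0.9 + 0.03 * rng.standard_normal(30)])
labels, centers = kmeans_1d(x)
head_index = int(np.argmax(labels != labels[0]))  # first contour in the other cluster
```

The index where the cluster label changes marks the transition from unchannelized to channelized contours, i.e. the channel head for that tributary.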
Point target detection utilizing super-resolution strategy for infrared scanning oversampling system
NASA Astrophysics Data System (ADS)
Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei
2017-11-01
To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information, which contributes to target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are combined into a single oversampled image for post-processing, but the aliasing between neighboring pixels leads to image degradation with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness, and efficiency of the proposed method compared with other state-of-the-art approaches.
Comprehensive evaluation of impacts of distributed generation integration in distribution network
NASA Astrophysics Data System (ADS)
Peng, Sujiang; Zhou, Erbiao; Ji, Fengkun; Cao, Xinhui; Liu, Lingshuang; Liu, Zifa; Wang, Xuyang; Cai, Xiaoyu
2018-04-01
Distributed generation (DG), as a supplement to centralized utilization of renewable energy, is becoming a focus of renewable energy development. With the increasing proportion of DG in distribution networks, the network power structure, power flow distribution, operation plans, and protection are affected to some extent. According to the main impacts of DG, a comprehensive evaluation model of a distribution network with DG is proposed in this paper. A comprehensive evaluation index system covering 7 aspects, along with the corresponding index calculation methods, is established for quantitative analysis. The indices under different DG access capacities are calculated on the IEEE RBTS-Bus 6 system, and the evaluation result is obtained by the analytic hierarchy process (AHP). A case study verifies the effectiveness and validity of the proposed model and method.
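The AHP weighting step can be sketched as follows: given a pairwise comparison matrix over the evaluation criteria, the criterion weights are the normalized principal eigenvector, and the consistency ratio checks the judgments. The 3-criterion comparison matrix below is an invented example, not the paper's 7-aspect index system.

```python
import numpy as np

# AHP sketch: pairwise comparison matrix -> priority weights via the
# principal eigenvector, plus the standard consistency ratio check.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                 # principal (Perron) eigenvalue
w = np.abs(vecs[:, k].real)
w = w / w.sum()                          # normalized AHP priority weights

# Consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI, with RI(3) = 0.58.
n = A.shape[0]
CR = (vals.real[k] - n) / (n - 1) / 0.58  # CR < 0.1 means acceptably consistent
```

In the paper's setting the resulting weights would combine the 7 aspect-level indices into the final comprehensive evaluation score.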
Locational Sensitivity Investigation on PV Hosting Capacity and Fast Track PV Screening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Mather, Barry; Ainsworth, Nathan
A 15% PV penetration threshold is commonly used by utilities in photovoltaic (PV) screening methods, where PV penetration is defined as the ratio of total solar PV capacity on a line section to peak load. However, this method does not take into account PV locational impact or feeder characteristics that could strongly change the feeder's capability to host PV. This paper investigates the impact of PV location and phase connection type on PV hosting capacity, and then proposes a fast-track PV screening approach that leverages various PV hosting capacity metrics responding to different PV locations and types. The proposed study could help utilities evaluate PV interconnection requests and also help increase the PV hosting capacity of distribution feeders without adverse impacts on system voltages.
NASA Astrophysics Data System (ADS)
Kawamoto, Shigeru; Ikeda, Yuichi; Fukui, Chihiro; Tateshita, Fumihiko
Private finance initiative (PFI) is a business scheme that delivers social infrastructure and public services by utilizing private-sector resources. In this paper we propose a new method to optimize the capital structure, i.e., the ratio of capital to debt, and the senior-sub structure, i.e., the ratio of senior loans to subordinated loans, for a private finance initiative. Using the proposed method, we quantitatively analyze a PFI project, examine the trade-off between risk and return, and optimize both the capital structure and the senior-sub structure. The proposed method helps improve the financial stability of the project and supports a fund-raising plan that is reasonable for both the project sponsor and the lender.
Multiple-image hiding using super resolution reconstruction in high-frequency domains
NASA Astrophysics Data System (ADS)
Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua
2017-12-01
In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, because of the memory-distributed property of elemental images; (2) improved quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.
BPP: a sequence-based algorithm for branch point prediction.
Zhang, Qing; Fan, Xiaodan; Wang, Yejun; Sun, Ming-An; Shao, Jianlin; Guo, Dianjing
2017-10-15
Although high-throughput sequencing methods have been proposed to identify splicing branch points in the human genome, these methods can only detect a small fraction of branch points, subject to sequencing depth, experimental cost, and the expression level of the mRNA. An accurate computational model for branch point prediction is therefore an ongoing objective in human genome research. We here propose a novel branch point prediction algorithm that utilizes information on the branch point sequence and the polypyrimidine tract. Using experimentally validated data, we demonstrate that our proposed method outperforms existing methods. Availability and implementation: https://github.com/zhqingit/BPP. Contact: djguo@cuhk.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of the data collection process. However, no existing method addresses all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing existing standard generalized estimating equations algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data exhibiting all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
Li, Shuang; Xie, Dongfeng
2016-11-17
In this paper, a new sensor array geometry, called a compressed symmetric nested array (CSNA), is designed to increase the degrees of freedom in the near field. As its name suggests, a CSNA is constructed by removing some elements from two identical nested arrays. Closed-form expressions are also presented for the sensor locations and the largest obtainable degrees of freedom as a function of the total number of sensors. Furthermore, a novel DOA estimation method is proposed that utilizes the CSNA in the near field. By employing this new array geometry, our method can identify more sources than sensors. Compared with other existing methods, the proposed method achieves higher resolution because of the increased array aperture. Simulation results are presented to verify the effectiveness of the proposed method.
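The CSNA construction itself is specific to the paper, but it builds on the standard two-level nested array, whose sensor positions and difference-coarray degrees of freedom have well-known closed forms. A background sketch of that standard construction (not the CSNA):

```python
def nested_array(n1, n2):
    """Sensor positions (in half-wavelength units) of a standard
    two-level nested array: a dense inner ULA of n1 sensors plus a
    sparse outer ULA of n2 sensors at multiples of (n1 + 1)."""
    inner = list(range(1, n1 + 1))
    outer = [m * (n1 + 1) for m in range(1, n2 + 1)]
    return inner + outer

def coarray_dof(positions):
    """Count distinct lags in the difference coarray; each extra lag
    is one more degree of freedom for resolving sources."""
    lags = {p - q for p in positions for q in positions}
    return len(lags)

pos = nested_array(3, 3)   # 6 physical sensors
print(pos)                 # [1, 2, 3, 4, 8, 12]
print(coarray_dof(pos))    # 23 consecutive lags = 2*n2*(n1+1) - 1
```

This is how 6 physical sensors yield 23 coarray lags, i.e. more identifiable sources than sensors; the CSNA compresses two such arrays further.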
Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro
2013-01-01
Capsule endoscopy is a patient-friendly endoscopy widely utilized in gastrointestinal examination. However, diagnostic efficacy is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine can be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification
NASA Astrophysics Data System (ADS)
Wang, X. P.; Hu, Y.; Chen, J.
2018-04-01
Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method that combines an adjacency graph and a similarity graph. We propose to construct the similarity graph using similarity probabilities, which exploit the label similarity among examples. The adjacency graph is built with a common manifold learning method, which effectively improves the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites the adjacency graph and the similarity graph, produces superior classification results compared with other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
Finger Vein Recognition Using Local Line Binary Pattern
Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin
2011-01-01
In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike the square shape used in the local binary pattern (LBP). Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP). PMID:22247670
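As a rough illustration of the line-shaped neighbourhood idea, here is a simplified horizontal-only variant of a line binary pattern; the line length, bit ordering, and border handling are assumptions for the sketch, not the authors' exact LLBP:

```python
import numpy as np

def llbp_horizontal(img, length=13):
    """Horizontal component of a local line binary pattern: each pixel's
    code thresholds neighbours along a straight horizontal line against
    the centre pixel (simplified sketch of the descriptor)."""
    half = length // 2
    offsets = [d for d in range(-half, half + 1) if d != 0]
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(half, w - half):        # skip borders
            c = img[y, x]
            bits = 0
            for k, dx in enumerate(offsets):
                bits |= int(img[y, x + dx] >= c) << k
            codes[y, x] = bits
    return codes
```

In the full descriptor a vertical line component is computed the same way and the two magnitudes are combined; histograms of the codes then feed the matcher.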
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
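The adaptive idea, replacing randomly scaled acceleration coefficients with fitness-derived ones, can be sketched as a single deterministic PSO step. The coefficient formula below is illustrative; the paper's exact AAPSO update may differ:

```python
def aapso_step(positions, velocities, pbest, gbest, fitness, w=0.7):
    """One PSO velocity/position update (1-D particles, minimization) in
    which the acceleration coefficients are computed from particle
    fitness instead of fixed constants with random scaling."""
    f_max, f_min = max(fitness), min(fitness)
    span = (f_max - f_min) or 1.0
    new_pos, new_vel = [], []
    for x, v, p, f in zip(positions, velocities, pbest, fitness):
        # better (lower) fitness -> lean on personal memory; worse -> on swarm
        c1 = 1.0 + (f_max - f) / span   # in [1, 2]
        c2 = 1.0 + (f - f_min) / span   # in [1, 2]
        v_new = w * v + c1 * (p - x) + c2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

Removing the random factors makes each step reproducible, which is the property the abstract argues improves SVM parameter tuning.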
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
NASA Astrophysics Data System (ADS)
Ushakov, Nikolai; Liokumovich, Leonid
2014-05-01
A novel approach to extrinsic Fabry-Perot interferometer baseline measurement has been developed. The principles of frequency-scanning interferometry are utilized to register the interferometer spectral function, from which the baseline is demodulated. The proposed approach enables one to capture absolute baseline variations at frequencies much higher than the spectral acquisition rate. Unlike conventional approaches, which associate a single baseline value with each registered spectrum, the proposed method applies a modified frequency detection procedure to the spectrum. This provides the ability to capture baseline variations that take place during spectrum acquisition. Limitations on the parameters of the registrable baseline variations are formulated. Experimental verification of the proposed approach under different perturbations has been performed.
Analytical sizing methods for behind-the-meter battery storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael; Yang, Tao
In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction, and the most economic size, can be determined by formulating and solving standard mathematical programming problems. In this approach, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. This approach cannot directly link the obtained optimal battery sizes to the input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify the key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or to guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insight into how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with those from mathematical programming based methods for validation.
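The cost/benefit reasoning behind demand-charge sizing can be illustrated with a toy peak-shaving calculation showing how the power and energy ratings jointly limit the achievable peak reduction (a sketch under simplified assumptions, not the paper's analytical rules):

```python
def achievable_peak_reduction(load_profile_kw, battery_kw, battery_kwh):
    """Peak reduction (kW) achievable by shaving an hour-resolution load
    profile, limited by both the battery power and energy ratings."""
    peak = max(load_profile_kw)
    target = peak - battery_kw
    # energy that must be discharged to hold the load at the target
    energy_needed = sum(max(0.0, l - target) for l in load_profile_kw)
    if energy_needed <= battery_kwh:
        return battery_kw            # the power rating binds
    # otherwise the energy rating binds; bisect on the reduction
    lo, hi = 0.0, battery_kw
    for _ in range(60):
        mid = (lo + hi) / 2
        need = sum(max(0.0, l - (peak - mid)) for l in load_profile_kw)
        if need <= battery_kwh:
            lo = mid
        else:
            hi = mid
    return lo
```

Multiplying the reduction by the demand charge rate ($/kW-month) gives the monthly saving, which can then be weighed against amortized battery cost per kW and kWh, the kind of trade-off the paper's guidelines make explicit.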
A Novel Numerical Method for Fuzzy Boundary Value Problems
NASA Astrophysics Data System (ADS)
Can, E.; Bayrak, M. A.; Hicdurmaz
2016-05-01
In the present paper, a new numerical method is proposed for solving fuzzy differential equations, which are utilized for modeling problems in science and engineering. The fuzzy approach is selected for its important applications in processing uncertain or subjective information in mathematical models of physical problems. A second-order fuzzy linear boundary value problem is considered in particular due to its important applications in physics. Moreover, numerical experiments are presented to show the effectiveness of the proposed numerical method on specific physical problems such as heat conduction in an infinite plate and in a fin.
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling non-identical machines with low utilization and fixed delivery times is a frequent problem in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear program and solved with a branch and bound algorithm. Fixed delivery times are used as the main constraint, with different processing times for each job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness under the fixed delivery time constraint.
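The objective of the model, total tardiness under fixed due dates, can be illustrated on a single machine; the exhaustive search below is a toy stand-in for the branch and bound solver (fine only for a handful of jobs):

```python
from itertools import permutations

def total_tardiness(sequence, proc_time, due):
    """Total tardiness of a job sequence on one machine:
    a job's tardiness is max(0, completion time - due date)."""
    t, tard = 0, 0
    for j in sequence:
        t += proc_time[j]
        tard += max(0, t - due[j])
    return tard

def best_sequence(proc_time, due):
    """Enumerate all sequences and keep the minimum-tardiness one."""
    jobs = range(len(proc_time))
    return min(permutations(jobs),
               key=lambda s: total_tardiness(s, proc_time, due))
```

Branch and bound explores the same sequence tree but prunes partial sequences whose lower bound already exceeds the best tardiness found, which is what makes realistic instances tractable.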
Liu, Baiyang; Lin, Guoying; Cui, Yuehui; Li, RongLin
2017-08-29
To utilize orbital angular momentum (OAM) mode diversity, multiple OAM beams should preferably be generated by a single antenna. In this paper, an OAM mode reconfigurable antenna is proposed. Unlike existing OAM antennas with multiple ports for transmitting multiple OAM modes, the proposed antenna has only a single port, yet it can transmit mode 1 or mode -1 OAM beams arbitrarily by controlling PIN diodes on the feeding network through a programmable microcontroller operated by a remote controller. Simulation and measurement results, including return loss, near-field and far-field radiation patterns of the two operating states (mode 1 and mode -1), and OAM mode orthogonality, are given. The proposed antenna can serve as a candidate for exploiting OAM diversity, namely phase diversity, to increase channel capacity at 2.4 GHz. Moreover, an OAM-mode based encoding method is experimentally demonstrated with the proposed OAM mode reconfigurable antenna, in which digital data are encoded and decoded by different OAM modes. At the transmitter, the antenna encodes the digital data by mapping data symbols 0 and 1 to OAM mode 1 and mode -1, respectively. At the receiver, the data symbols are decoded by a phase gradient method.
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects
2011-01-01
Background Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets adapted to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians. PMID:21707989
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
Zhang, Junhua; Wang, Yuanyuan; Shi, Xinling
2009-12-01
A modified graph cut was proposed under an elliptical shape constraint to segment cervical lymph nodes on sonograms, and its effect on the measurement of the short-axis to long-axis ratio (S/L) was investigated using the relative ultimate measurement accuracy (RUMA). Under the same user inputs, the proposed algorithm successfully segmented all 60 sonograms tested, while the traditional graph cut failed. The mean RUMA of the developed method was comparable to that of manual segmentation. The results indicated that utilizing the elliptical shape prior appreciably improves graph cut segmentation of nodes, and that the proposed method satisfies the accuracy requirement of S/L measurement.
Contact-free heart rate measurement using multiple video data
NASA Astrophysics Data System (ADS)
Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei
2013-10-01
In this paper, we propose a contact-free heart rate measurement method that analyzes sequential images from multiple video data. In the proposed method, skin-like pixels are first detected in the multiple video data to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent component candidates to measure the heart rate, achieving under 2% average deviation compared with a pulse oximeter in a controlled environment. The advantages of the proposed method are: 1) it uses a low-cost, highly accessible camera device; 2) it eases users' discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.
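A heavily simplified version of the pipeline, one color trace and a spectral peak instead of the paper's multi-camera ICA component selection, might look like the following (the cardiac band limits are assumptions for the sketch):

```python
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate heart rate from the frame-by-frame mean of skin-pixel
    green values: remove the DC level, then take the dominant FFT
    frequency within a plausible cardiac band."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # ~42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

In the paper, ICA separates the pulse component from motion and lighting artifacts across several cameras before this spectral step; the sketch assumes a single clean trace.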
Mode conversion in metal-insulator-metal waveguide with a shifted cavity
NASA Astrophysics Data System (ADS)
Wang, Yueke; Yan, Xin
2018-01-01
We propose a method to theoretically achieve plasmonic mode conversion in a metal-insulator-metal (MIM) waveguide. The proposed structure is composed of bus waveguides and a shifted cavity. The shifted cavity can select a plasmonic mode (a- or s-mode) when it is in Fabry-Perot (FP) resonance. With the length L of the shifted cavity carefully chosen, the structure achieves mode conversion between the a- and s-modes in the communication region. Besides, the proposed structure can also achieve plasmonic mode-division multiplexing. All numerical simulations are carried out with the finite element method to verify the design.
Measurement of impulse peak insertion loss for four hearing protection devices in field conditions
Murphy, William J.; Flamme, Gregory A.; Meinke, Deanna K.; Sondergaard, Jacob; Finan, Donald S.; Lankford, James E.; Khan, Amir; Vernon, Julia; Stewart, Michael
2015-01-01
Objective In 2009, the U.S. Environmental Protection Agency (EPA) proposed an impulse noise reduction rating (NRR) for hearing protection devices based upon the impulse peak insertion loss (IPIL) methods in the ANSI S12.42-2010 standard. This study tests the ANSI S12.42 methods with a range of hearing protection devices measured in field conditions. Design The method utilizes an acoustic test fixture and three ranges for impulse levels: 130–134, 148–152, and 166–170 dB peak SPL. For this study, four different models of hearing protectors were tested: Bilsom 707 Impact II electronic earmuff, E·A·R Pod Express, E·A·R Combat Arms version 4, and the Etymotic Research, Inc. Electronic BlastPLG™ EB1. Study sample Five samples of each protector were fitted on the fixture or inserted in the fixture's ear canal five times for each impulse level. Impulses were generated by a 0.223 caliber rifle. Results The average IPILs increased with peak pressure and ranged between 20 and 38 dB. For some protectors, significant differences were observed across protector examples of the same model, and across insertions. Conclusions The EPA's proposed methods provide consistent and reproducible results. The proposed impulse NRR rating should utilize the minimum and maximum protection percentiles as determined by the ANSI S12.42-2010 methods. PMID:22176308
On the Existence and Uniqueness of the Scientific Method.
Wagensberg, Jorge
2014-01-01
The ultimate utility of science is widely agreed upon: the comprehension of reality. But there is much controversy about what scientific understanding actually means, and how we should proceed in order to gain new scientific understanding. Is there a method for acquiring new scientific knowledge? Is this method unique and universal? There has been no shortage of proposals, but neither has there been a shortage of skeptics about these proposals. This article proffers for discussion a potential scientific method that aspires to be unique and universal and is rooted in the recent and ancient history of scientific thinking. Curiously, conclusions can be inferred from this scientific method that also concern education and the transmission of science to others.
Analysis of quetiapine in human plasma using fluorescence spectroscopy
NASA Astrophysics Data System (ADS)
Mostafa, Islam M.; Omar, Mahmoud A.; Nagy, Dalia M.; Derayea, Sayed M.
2018-05-01
A simple and sensitive spectrofluorimetric method has been developed for the determination of quetiapine fumarate (QTF). The proposed method is based on measuring the fluorescence intensity of the yellow fluorescent product at 510 nm (λex 470 nm). The fluorescent product results from the nucleophilic substitution reaction of QTF with 4-chloro-7-nitrobenzofurazan (NBD-Cl) in McIlvaine buffer (pH 7.0). The various variables influencing the formation of the reaction product were carefully studied and optimized. The linear concentration range of the proposed method was 0.2-2.0 μg ml-1. The limits of detection and quantitation were 0.05 and 0.17 μg ml-1, respectively. The proposed method was applied to the determination of QTF in its tablets without interference from common excipients. In addition, the proposed method was used for in vitro analysis of QTF in spiked human plasma; the mean percent recovery (n = 3) was 98.82 ± 1.484%.
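Reported LOD and LOQ values of this kind typically follow from the common 3.3·σ/slope and 10·σ/slope rules applied to a linear calibration; a generic sketch of that arithmetic (synthetic data, not the authors' measurements):

```python
import statistics

def calibration(concs, intensities):
    """Least-squares calibration line; LOD and LOQ from the common
    3.3*sigma/slope and 10*sigma/slope rules, with sigma taken as the
    standard deviation of the residuals."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(intensities) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    slope = sum((x - mx) * (y - my)
                for x, y in zip(concs, intensities)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept)
             for x, y in zip(concs, intensities)]
    sigma = statistics.stdev(resid)
    return slope, intercept, 3.3 * sigma / slope, 10 * sigma / slope
```

Feeding in fluorescence readings over the 0.2-2.0 μg ml-1 range would reproduce the slope, LOD, and LOQ style of figures the abstract reports.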
Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B
2017-03-01
The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing a Design of Experiments (DoE) approach. A Plackett-Burman design was used to assess the impact of independent variables (concentration of the organic phase, mobile phase pH, flow rate, and column temperature) on peak resolution, USP tailing, and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of the independent variables on the selected dependent variables. The optimized HPLC method was validated according to the ICH Q2(R1) guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that the DoE approach can be effectively applied to optimize a robust HPLC method for quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between the results of the proposed and a previously reported HPLC method revealed no significant difference, indicating the suitability of the proposed HPLC method for the analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations.
Optimizing area under the ROC curve using semi-supervised learning
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.
2014-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results. PMID:25395692
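The quantity being maximized has a simple rank interpretation: AUC equals the probability that a randomly chosen positive outranks a randomly chosen negative (the Wilcoxon-Mann-Whitney statistic). A minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Wilcoxon-Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in scores_pos
        for q in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# 8 of the 9 pairs are ranked correctly:
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

The pairwise ranking constraints in this definition are exactly what the SSLROC formulation encodes into its semi-definite program, extended to cover unlabeled test samples.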
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2015-12-01
The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are utilized to measure the discriminative capability of a band subset. Finally, the band subset with the smallest number of spectral bands that covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results reveal the superiority of the proposed method and indicate that it can considerably reduce data storage costs and efficiently identify a band subset with stable and high classification precision.
Adaptive Encoding for Numerical Data Compression.
ERIC Educational Resources Information Center
Yokoo, Hidetoshi
1994-01-01
Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective for any word length. Its application to the lossless compression of gray-scale images…
Chen, Xiyuan; Wang, Xiying; Xu, Yuan
2014-01-01
This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
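The iterated measurement update at the core of an IEKF, relinearizing the measurement function around the refreshed estimate on each pass, can be sketched in scalar form (the adaptive noise-statistics estimator that distinguishes AIEKF is omitted):

```python
def iekf_update(x0, P, z, h, H_jac, R, iters=5):
    """Scalar iterated EKF measurement update: x0/P are the prior mean
    and variance, z the measurement, h the measurement function, H_jac
    its derivative, R the measurement noise variance."""
    x = x0
    for _ in range(iters):
        H = H_jac(x)
        S = H * P * H + R                        # innovation variance
        K = P * H / S                            # Kalman gain
        x = x0 + K * (z - h(x) - H * (x0 - x))   # relinearized update
    P_new = (1.0 - K * H_jac(x)) * P
    return x, P_new
```

For a linear measurement the loop converges immediately to the ordinary Kalman update; for a nonlinear one the repeated relinearization is what gives the IEKF (and hence AIEKF) its edge over the plain EKF.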
Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.
Cheng, Ching-Hsue; Liu, Wei-Xiang
2018-05-28
Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images. In addition, hybrid segmentation techniques improve on the shortcomings of single segmentation methods. Therefore, this study proposes a hybrid segmentation method combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image processing method to enhance the accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, the wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
Utility indifference pricing of insurance catastrophe derivatives.
Eichler, Andreas; Leobacher, Gunther; Szölgyenyi, Michaela
2017-01-01
We propose a model for an insurance loss index and the claims process of a single insurance company holding a fraction of the total number of contracts that captures both ordinary losses and losses due to catastrophes. In this model we price a catastrophe derivative by the method of utility indifference pricing. The associated stochastic optimization problem is treated by techniques for piecewise deterministic Markov processes. A numerical study illustrates our results.
A distributed algorithm for demand-side management: Selling back to the grid.
Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M
2017-11-01
Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
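The constrained-to-unconstrained conversion described above can be sketched with a generic quadratic penalty; the paper's segmentation-based penalty and logarithmic cost are not reproduced here, so the cost and constraint below are illustrative assumptions only:

```python
def penalized_objective(cost, inequality_constraints, x, rho):
    """Unconstrained surrogate for min cost(x) s.t. g_i(x) <= 0:
    add rho times the squared violation of each constraint."""
    violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
    return cost(x) + rho * violation

# Toy single-consumer example: minimize a convex cost x**2 subject to
# meeting a minimum demand x >= 1 (written as 1 - x <= 0).
cost = lambda x: x ** 2
constraints = [lambda x: 1.0 - x]
value = penalized_objective(cost, constraints, 0.5, rho=10.0)
```

As rho grows, minimizers of the surrogate approach the constrained optimum; distributed schemes such as adaptive diffusion then operate on this smooth unconstrained objective.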
Chai, Hua; Li, Zi-Na; Meng, De-Yu; Xia, Liang-Yong; Liang, Yong
2017-10-12
Gene selection is an attractive and important task in cancer survival analysis. Most existing supervised learning methods can only use the labeled biological data, while the censored data (weakly labeled data), which far outnumber the labeled data, are ignored in model building. To utilize the information in the censored data, a semi-supervised learning framework (the Cox-AFT model) combining the Cox proportional hazards (Cox) and accelerated failure time (AFT) models has been used in cancer research, with better performance than the single Cox or AFT model. This method, however, is easily affected by noise. To alleviate this problem, in this paper we combine the Cox-AFT model with the self-paced learning (SPL) method to more effectively employ the information in the censored data in a self-learning way. SPL is a reliable and stable learning mechanism, recently proposed to simulate the human learning process, which helps the AFT model automatically identify and include samples of high confidence into training, minimizing interference from high noise. Utilizing the SPL method produces two direct advantages: (1) the utilization of censored data is further promoted; (2) the noise delivered to the model is greatly decreased. The experimental results demonstrate the effectiveness of the proposed model compared to the traditional Cox-AFT model.
Low extractable wipers for cleaning space flight hardware
NASA Technical Reports Server (NTRS)
Tijerina, Veronica; Gross, Frederick C.
1986-01-01
There is a need for low extractable wipers for solvent cleaning of space flight hardware. Soxhlet extraction is the method utilized today by most NASA subcontractors, but there may be alternate methods to achieve the same results. The need for low non-volatile residue materials, the history of Soxhlet extraction, and proposed alternate methods are discussed, as well as different types of wipers, test methods, and current standards.
NASA Astrophysics Data System (ADS)
Masuda, Kazuaki; Aiyoshi, Eitaro
We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines both whether each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. By duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. Since the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
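The CTP construction above starts from the closed-form zeros of the Chebyshev polynomial T_n. A minimal sketch of the node generation and tensor-product sampling, with the 'hypercube'/'simplex' polynomial fitting omitted:

```python
import math
import itertools

def chebyshev_zeros(n, a=-1.0, b=1.0):
    """Zeros of the degree-n Chebyshev polynomial T_n,
    x_k = cos((2k - 1) * pi / (2n)), affinely mapped onto [a, b]."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return [0.5 * (a + b) + 0.5 * (b - a) * x for x in nodes]

def ctp_samples(n, bounds):
    """Chebyshev tensor-product (CTP) sampling: the Cartesian product
    of the 1D node sets over each dimension's interval."""
    axes = [chebyshev_zeros(n, a, b) for a, b in bounds]
    return list(itertools.product(*axes))
```

For example, `ctp_samples(3, [(-1, 1), (0, 2)])` yields a 3 x 3 grid of nine samples; a CCM-style subset would then be drawn randomly from this grid.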
Weakly supervised image semantic segmentation based on clustering superpixels
NASA Astrophysics Data System (ADS)
Yan, Xiong; Liu, Xiaohua
2018-04-01
In this paper, we propose an image semantic segmentation model which is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph and apply a graph partition method to group correlated superpixels into clusters. To acquire inter-label correlations between the image-level labels in the dataset, we not only utilize label co-occurrence statistics but also exploit visual contextual cues simultaneously. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 dataset and the LabelMe dataset show that the proposed method performs better than most weakly supervised methods and is even comparable to fully supervised methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data was utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed to be known. Continuous-time system transfer function matrix parameters were estimated in real-time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
NASA Astrophysics Data System (ADS)
Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim
2017-10-01
A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-modeling approach for socio-ecological risk assessment is formally defined. A method for utilizing observation, measurement and modeling data in the framework of the multi-model approach is described. The methodology and models of risk assessment in the framework of a decision support approach are defined and described. A method of water quality assessment using satellite observation data is described. The method is based on analysis of the spectral reflectance of aquifers. Spectral signatures of freshwater bodies and offshore waters are analyzed. Correlations between spectral reflectance, pollution and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS and Landsat sensors received in 2002-2014 have been utilized, verified by in-field spectrometry and lab measurements. A fuzzy logic based approach for decision support in the field of water quality degradation risk is discussed. The decision on the water quality category is made based on a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements and modeling are utilized in the framework of the proposed approach. It is shown that this algorithm allows estimation of the water quality degradation rate and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual check, quantifies the load model's accuracy, and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
Brain tissues volume measurements from 2D MRI using parametric approach
NASA Astrophysics Data System (ADS)
L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.
2018-04-01
The purpose of the paper is to propose a fully automated method for volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle for the decision making process on measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model which utilizes knowledge of the tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is computationally efficient and independent of the segmentation algorithm used in the application.
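The maximum normalized residual (Grubbs) statistic used above for outlier detection can be sketched as follows; the comparison against a t-distribution critical value, which decides whether the flagged observation is actually rejected, is omitted:

```python
import statistics

def max_normalized_residual(data):
    """Return the Grubbs statistic G = max |x_i - mean| / s and the
    index of the most deviant observation."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)
    residuals = [abs(x - mean) / s for x in data]
    g = max(residuals)
    return g, residuals.index(g)

# Illustrative measurements: the last value is the suspect outlier.
g, idx = max_normalized_residual([10.1, 9.9, 10.0, 10.2, 14.5])
```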
Decentralized method for utility regulation: a comment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharkey, W.W.
The author comments on the article by Loeb and Magat in this journal issue (P 399); he feels their idea is worthy of more-detailed examination on an industry-specific level. He confines his comments, however, to a more-general comparison of the Loeb-Magat (L-M) scheme, traditional rate-of-return regulation, and pure franchise bidding. The basic L-M proposal consists of two parts. First it is shown that if a utility is subsidized by an amount corresponding to total consumer surplus, then it will have the incentive to pursue cost-minimizing behavior and to set its price equal to the marginal cost of production. Mr. Sharkey believes that this pure subsidy scheme would be wholly unworkable in practice. The second part of the L-M proposal consists of the subsidy scheme combined with either franchise bidding or a lump-sum tax. Mr. Sharkey feels that this proposal has considerable merit if conditions exist such that the net subsidy paid to the utility is sufficiently small; net subsidy is defined as the excess of the actual subsidy plus revenues of the firm over the total cost of production. Thus, the net subsidy is the excess profit the utility receives compared to a utility perfectly regulated by traditional means. Mr. Sharkey elaborates on some of his objections to the L-M proposal for cases in which the net subsidy is large. Then, he briefly considers the characteristics of a natural monopoly market which could potentially be regulated by a combined subsidy-franchise-tax scheme.
Orthogonal Chirp-Based Ultrasonic Positioning
Khyam, Mohammad Omar; Ge, Shuzhi Sam; Li, Xinde; Pickering, Mark
2017-01-01
This paper presents a chirp-based ultrasonic positioning system (UPS) using orthogonal chirp waveforms. In the proposed method, multiple transmitters can simultaneously transmit chirp signals; as a result, it can efficiently utilize the entire available frequency spectrum. The fundamental idea behind the proposed multiple access scheme is to utilize the oversampling methodology of orthogonal frequency-division multiplexing (OFDM) modulation and the orthogonality of the discrete frequency components of a chirp waveform. In addition, the proposed orthogonal chirp waveforms also have all the advantages of a classical chirp waveform. Firstly, the performance of the waveforms is investigated through correlation analysis and then, in an indoor environment, evaluated through simulations and experiments for ultrasonic (US) positioning. For an operational range of approximately 1000 mm, the positioning root-mean-square error (RMSE) and 90% error were 4.54 mm and 6.68 mm, respectively. PMID:28448454
Advanced system demonstration for utilization of biomass as an energy source. Environmental report
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCollom, M.
1979-01-01
The conclusions and findings of extensive analyses undertaken to assess the environmental impacts and effects of the proposal to assist in an Advanced System Demonstration for Utilization of Biomass as an Energy Source by means of a wood-fueled power plant. Included are a description of the proposed project, a discussion of the existing environment that the project would affect, a summary of the project's impacts on the natural and human environments, a discussion of the project's relationships to other government policies and plans, and an extensive review of the alternatives which were considered in evaluating the proposed action. All findings of the research undertaken are discussed. More extensive presentations of the methods of analysis used to arrive at the various conclusions are available in ten topical technical appendices.
The development of a revised version of multi-center molecular Ornstein-Zernike equation
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi
2012-04-01
Ornstein-Zernike (OZ)-type theory is a powerful tool to obtain the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of functions were examined as the reference part to investigate the numerical robustness of the method. As a benchmark, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.
Multiple-3D-object secure information system based on phase shifting method and single interference.
Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam
2016-05-20
We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, five phase functions (PFs) are decreased to three PFs, in comparison with our previous method, which implies that one cross beam splitter is utilized to implement the single decryption interference. Moreover, the advantages of the proposed scheme also include: each 3D object can be decrypted discretionarily without decrypting a series of other objects earlier; the quality of the decrypted slice image of each object is high according to the correlation coefficient values, none of which is lower than 0.95; no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
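Three-step phase-shifting recovery, the building block of such schemes, has a standard closed form when the shifts are 0, 2π/3 and 4π/3; this shift convention is an assumption here, as the paper's exact steps are not specified in the abstract:

```python
import math

def phase_from_three_steps(i1, i2, i3):
    """Recover the wrapped phase phi from three intensity captures
    I_k = A + B*cos(phi + delta_k) with delta = 0, 2*pi/3, 4*pi/3:
    tan(phi) = sqrt(3) * (I3 - I2) / (2*I1 - I2 - I3)."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Round-trip check with a known phase, background A = 2 and modulation B = 1.
phi = 0.7
captures = [2.0 + math.cos(phi + d) for d in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
recovered = phase_from_three_steps(*captures)
```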
Beyond Happiness and Satisfaction: Toward Well-Being Indices Based on Stated Preference*
Benjamin, Daniel J.; Kimball, Miles S.; Heffetz, Ori; Szembrot, Nichole
2014-01-01
This paper proposes foundations and a methodology for survey-based tracking of well-being. First, we develop a theory in which utility depends on “fundamental aspects” of well-being, measurable with surveys. Second, drawing from psychologists, philosophers, and economists, we compile a comprehensive list of such aspects. Third, we demonstrate our proposed method for estimating the aspects’ relative marginal utilities—a necessary input for constructing an individual-level well-being index—by asking ~4,600 U.S. survey respondents to state their preference between pairs of aspect bundles. We estimate high relative marginal utilities for aspects related to family, health, security, values, freedom, happiness, and life satisfaction. PMID:25404760
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, He; Sun, Yannan; Carroll, Thomas E.
We propose a coordination algorithm for cooperative power allocation among a collection of commercial buildings within a campus. We introduce thermal and power models of a typical commercial building Heating, Ventilation, and Air Conditioning (HVAC) system, and utilize model predictive control to characterize their power flexibility. The power allocation problem is formulated as a cooperative game using the Nash Bargaining Solution (NBS) concept, in which buildings collectively maximize the product of their utilities subject to their local flexibility constraints and a total power limit set by the campus coordinator. To solve the optimal allocation problem, a distributed protocol is designed using dual decomposition of the Nash bargaining problem. Numerical simulations are performed to demonstrate the efficacy of our proposed allocation method.
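For intuition, the Nash bargaining objective admits a closed form in the simplest transferable-allocation case: with linear utilities u_i = p_i − m_i above each building's minimum feasible power m_i, maximizing the product of utilities under a total power limit splits the surplus equally. This is a toy instance under those assumptions, not the paper's HVAC model or distributed protocol:

```python
def nbs_allocation(minimums, total):
    """Maximize prod(p_i - m_i) subject to sum(p_i) = total and
    p_i >= m_i: the first-order conditions give equal surplus shares."""
    surplus = total - sum(minimums)
    if surplus <= 0:
        raise ValueError("total power limit is infeasible")
    share = surplus / len(minimums)
    return [m + share for m in minimums]
```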
NASA Astrophysics Data System (ADS)
Chen, Li-Chieh; Huang, Mei-Jiau
2017-02-01
A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the DFFD (direct forcing fictitious domain) method with the spectral element method; the former is employed for efficiently capturing the two-way FSI (fluid-structure interaction) and the geometric flexibility of the latter is utilized for any possibly co-existing stationary and complicated solid or flow boundary. A pseudo body force is imposed within the solid domain to enforce the rigid body motion and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through an observed agreement of the simulation results of some typical flows with analytical solutions or existing literatures.
Fast spacecraft adaptive attitude tracking control through immersion and invariance design
NASA Astrophysics Data System (ADS)
Wen, Haowei; Yue, Xiaokui; Li, Peng; Yuan, Jianping
2017-10-01
This paper presents a novel non-certainty-equivalence adaptive control method for the attitude tracking control problem of spacecraft with inertia uncertainties. The proposed immersion and invariance (I&I) based adaptation law provides a more direct and flexible approach to circumvent the limitations of the basic I&I method without employing any filter signal. By virtue of the adaptation high-gain equivalence property derived from the proposed adaptive method, the closed-loop adaptive system with a low adaptation gain could recover the high adaptation gain performance of the filter-based I&I method, and the resulting control torque demands during the initial transient has been significantly reduced. A special feature of this method is that the convergence of the parameter estimation error has been observably improved by utilizing an adaptation gain matrix instead of a single adaptation gain value. Numerical simulations are presented to highlight the various benefits of the proposed method compared with the certainty-equivalence-based control method and filter-based I&I control schemes.
Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng
2018-03-01
Wisely utilizing the internal and external learning methods is a new challenge in super-resolution problem. To address this issue, we analyze the attributes of two methodologies and find two observations of their recovered details: 1) they are complementary in both feature space and image plane and 2) they distribute sparsely in the spatial space. These inspire us to propose a low-rank solution which effectively integrates two learning methods and then achieves a superior result. To fit this solution, the internal learning method and the external learning method are tailored to produce multiple preliminary results. Our theoretical analysis and experiment prove that the proposed low-rank solution does not require massive inputs to guarantee the performance, and thereby simplifying the design of two learning methods for the solution. Intensive experiments show the proposed solution improves the single learning method in both qualitative and quantitative assessments. Surprisingly, it shows more superior capability on noisy images and outperforms state-of-the-art methods.
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance. So, static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
Estimating integrated variance in the presence of microstructure noise using linear regression
NASA Astrophysics Data System (ADS)
Holý, Vladimír
2017-07-01
Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but with an increasing number of observations, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
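Under the standard i.i.d. noise model, E[RV_n] ≈ IV + 2nω², so regressing realized variances on observation counts recovers the integrated variance IV as the intercept and the noise contribution as the slope. A minimal OLS sketch with illustrative numbers (not data from the paper):

```python
def ols_line(x, y):
    """Ordinary least squares fit y ~ intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical realized variances computed from subsamples of different
# sizes; the intercept estimates IV and the slope reflects 2 * omega**2.
counts = [100, 200, 400, 800]
realized = [0.012, 0.014, 0.018, 0.026]
iv_hat, noise_slope = ols_line(counts, realized)
```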
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. Satellite-based augmentation system (SBAS) provides the correction products in real-time, but they are intended to be used for wide area differential positioning at 1 meter level precision. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach decimeter level, and achieve an improvement of at least 30% when compared with the traditional SBAS method. The positioning convergence of the proposed PPP method can be achieved in 636 epochs at most in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method can be improved by at least 20 cm relative to the SBAS method. Furthermore, it is revealed that the proposed PPP method can achieve decimeter level convergence within 500 s in the kinematic mode. PMID:27517930
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
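The commute time mentioned above has a standard closed form via the pseudoinverse of the graph Laplacian: C(i,j) = vol(G)·(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ). A minimal sketch of that computation (the constraint-querying recursion from the paper is not reproduced here):

```python
import numpy as np

def commute_times(adj):
    """Pairwise commute times C(i,j) = vol(G) * (L+_ii + L+_jj - 2*L+_ij),
    where L+ is the Moore-Penrose pseudoinverse of the graph Laplacian."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj          # combinatorial Laplacian
    Lp = np.linalg.pinv(L)
    vol = deg.sum()                 # vol(G) = 2 * number of edges
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)

# path graph 0-1-2: commute time equals 2m * effective resistance (m = #edges)
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
C = commute_times(A)
```

For the 3-node path, effective resistances are 1 (adjacent) and 2 (endpoints), so with m = 2 edges the commute times are 4 and 8.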
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods in a real-world data set.
NASA Astrophysics Data System (ADS)
Areekul, Phatchakorn; Senjyu, Tomonobu; Urasaki, Naomitsu; Yona, Atsushi
Electricity price forecasting is becoming increasingly relevant to power producers and consumers in the new competitive electric power markets when planning bidding strategies in order to maximize their benefits and utilities, respectively. This paper proposes a method to predict hourly electricity prices for next-day electricity markets using a combined ARIMA and ANN methodology. The proposed method is examined on the Australian National Electricity Market (NEM), New South Wales region, for the year 2006. A comparison of the forecasting performance of the ARIMA, ANN, and combined (ARIMA-ANN) models is presented. Empirical results indicate that the ARIMA-ANN model can improve price forecasting accuracy.
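The hybrid idea above (a linear model captures the autoregressive structure, a nonlinear model corrects its residuals) can be sketched in miniature. This is our own toy: an AR(1) fit by least squares, with a 1-nearest-neighbor residual model standing in for the ANN, evaluated in-sample on a synthetic series rather than NEM prices:

```python
import math

def fit_ar1(y):
    """Least-squares AR(1): y_t ~ a * y_{t-1}."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# synthetic nonlinear AR series the linear model cannot fully capture
y = [0.5]
for _ in range(200):
    y.append(0.8 * y[-1] + 0.3 * math.sin(4.0 * y[-1]))

a = fit_ar1(y)
# stage 2 training data: (lagged value, linear-model residual)
pairs = [(y[t - 1], y[t] - a * y[t - 1]) for t in range(1, len(y))]

def hybrid_predict(prev):
    """Linear part plus the residual of the nearest training lag (1-NN
    stand-in for the paper's ANN residual model)."""
    _, resid = min(pairs, key=lambda p: abs(p[0] - prev))
    return a * prev + resid

linear_mse = sum((y[t] - a * y[t - 1]) ** 2 for t in range(1, len(y))) / (len(y) - 1)
hybrid_mse = sum((y[t] - hybrid_predict(y[t - 1])) ** 2
                 for t in range(1, len(y))) / (len(y) - 1)
```

Because the residual here is a deterministic function of the lag, the residual-correction stage removes the error the linear stage leaves behind, which is the mechanism the ARIMA-ANN combination exploits.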
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
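The LBP texture-feature step referenced above can be illustrated with the classic 8-neighbor operator (a generic sketch, not the paper's exact variant): each pixel is encoded by thresholding its eight neighbors against the center value.

```python
def lbp_code(img, r, c):
    """Classic 8-neighbor LBP: set a bit when the neighbor >= center,
    walking clockwise from the top-left neighbor."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_image(img):
    """LBP codes for all interior pixels of a 2-D intensity grid."""
    h, w = len(img), len(img[0])
    return [[lbp_code(img, r, c) for c in range(1, w - 1)]
            for r in range(1, h - 1)]

# a constant patch: every neighbor ties the center, so every bit is set
flat = [[7] * 4 for _ in range(4)]
codes = lbp_image(flat)
```

A flat region yields the all-ones code (255) everywhere, which is why LBP histograms separate smooth from textured regions.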
The Long Term Effectiveness of Intensive Stuttering Therapy: A Mixed Methods Study
ERIC Educational Resources Information Center
Irani, Farzan; Gabel, Rodney; Daniels, Derek; Hughes, Stephanie
2012-01-01
Purpose: The purpose of this study was to gain a deeper understanding of client perceptions of an intensive stuttering therapy program that utilizes a multi-faceted approach to therapy. The study also proposed to gain a deeper understanding about the process involved in long-term maintenance of meaningful changes made in therapy. Methods: The…
A Novel Noncircular MUSIC Algorithm Based on the Concept of the Difference and Sum Coarray.
Chen, Zhenhong; Ding, Yingtao; Ren, Shiwei; Chen, Zhiming
2018-01-25
In this paper, we propose a vectorized noncircular MUSIC (VNCM) algorithm based on the concept of the coarray, which can construct the difference and sum (diff-sum) coarray, for direction finding of the noncircular (NC) quasi-stationary sources. Utilizing both the NC property and the concept of the Khatri-Rao product, the proposed method can be applied to not only the ULA but also sparse arrays. In addition, we utilize the quasi-stationary characteristic instead of the spatial smoothing method to solve the coherent issue generated by the Khatri-Rao product operation so that the available degree of freedom (DOF) of the constructed virtual array will not be reduced by half. Compared with the traditional NC virtual array obtained in the NC MUSIC method, the diff-sum coarray achieves a higher number of DOFs as it comprises both the difference set and the sum set. Due to the complementarity between the difference set and the sum set for the coprime array, we choose the coprime array with multiperiod subarrays (CAMpS) as the array model and summarize the properties of the corresponding diff-sum coarray. Furthermore, we develop a diff-sum coprime array with multiperiod subarrays (DsCAMpS) whose diff-sum coarray has a higher DOF. Simulation results validate the effectiveness of the proposed method and the high DOF of the diff-sum coarray.
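The DOF gain from the diff-sum coarray described above comes from unioning the difference set with the (symmetrized) sum set of the physical sensor positions. A small sketch with an illustrative sparse array (not the CAMpS geometry from the paper):

```python
def diff_set(pos):
    """Difference coarray lags {p_i - p_j}."""
    return {a - b for a in pos for b in pos}

def sum_set(pos):
    """Sum coarray lags, taken symmetrically as {+/-(p_i + p_j)}."""
    s = {a + b for a in pos for b in pos}
    return s | {-v for v in s}

# illustrative sparse array positions (in half-wavelength units)
pos = [0, 1, 4, 6]
d = diff_set(pos)
ds = d | sum_set(pos)   # diff-sum coarray: union of difference and sum sets
```

For these positions the difference set already covers every lag from -6 to 6, and the sum set contributes the extra lags ±7, ±8, ±10, ±12 — the complementarity between the two sets that the paper exploits for a higher DOF.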
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
In this paper, a new method is proposed for extracting dots from the reflected light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of the reflected light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible, with equal distances between the dots. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method could be utilized for fast, accurate and automatic dot extraction from the PSi microarray reflected light image.
Multi-Scale Stochastic Resonance Spectrogram for fault diagnosis of rolling element bearings
NASA Astrophysics Data System (ADS)
He, Qingbo; Wu, Enhao; Pan, Yuanyuan
2018-04-01
It is not easy to identify an incipient defect of a rolling element bearing by analyzing the vibration data because of the disturbance of background noise. The weak and unrecognizable transient fault signal of a mechanical system can be enhanced by the stochastic resonance (SR) technique, which utilizes the noise in the system. However, it is challenging for the SR technique to identify sensitive fault information in non-stationary signals. This paper proposes a new method called the multi-scale SR spectrogram (MSSRS) for bearing defect diagnosis. The new method considers the non-stationary property of the defective bearing vibration signals, and treats every scale of the time-frequency distribution (TFD) as a modulation system. The SR technique is then applied to each modulation system according to each frequency in the TFD. The SR results are sensitive to the defect information because the energy of the transient vibration is distributed in a limited frequency band in the TFD. Collecting the spectra of the SR outputs at all frequency scales then generates the MSSRS. The proposed MSSRS is able to deal well with the non-stationary transient signal, and can highlight the defect-induced frequency component corresponding to the impulse information. Experimental results with practical defective bearing vibration data have shown that the proposed method outperforms former SR methods and exhibits a good application prospect in rolling element bearing fault diagnosis.
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
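The discrete chirp-Fourier transform underlying the method above jointly matches a frequency index k and a chirp-rate index l. A minimal sketch (the 1/√N normalization is one common convention, and the phase-derivative estimation pipeline from the Letter is not reproduced):

```python
import cmath

def dcft(x):
    """Discrete chirp-Fourier transform:
    X(k, l) = (1/sqrt(N)) * sum_n x[n] * exp(-j*2*pi*(k*n + l*n^2)/N)."""
    N = len(x)
    return [[sum(x[n] * cmath.exp(-2j * cmath.pi * (k * n + l * n * n) / N)
                 for n in range(N)) / cmath.sqrt(N)
             for l in range(N)] for k in range(N)]

# a pure chirp with frequency k0 and chirp rate l0 peaks at (k0, l0);
# for prime N the matched peak is sqrt(N) and the sidelobes stay at 1 or 0
N, k0, l0 = 11, 3, 2
x = [cmath.exp(2j * cmath.pi * (k0 * n + l0 * n * n) / N) for n in range(N)]
X = dcft(x)
```

Locating that peak over (k, l) directly yields the chirp parameters, which is the property exploited for direct phase-derivative estimation.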
Detecting subsurface fluid leaks in real-time using injection and production rates
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Huerta, Nicolas J.
2017-12-01
CO2 injection into geologic formations for either enhanced oil recovery or carbon storage introduces a risk of undesired fluid leakage into overlying groundwater or to the surface. Despite decades of subsurface CO2 production and injection, the technologies and methods for detecting CO2 leaks are still costly and prone to large uncertainties. This is especially true for pressure-based monitoring methods, which require the use of simplified geological and reservoir flow models to simulate the pressure behavior as well as the background noise affecting pressure measurements. In this study, we propose a method to detect the time and volume of fluid leakage based on real-time measurements of well injection and production rates. The approach utilizes analogies between fluid flow and capacitance-resistance modeling. Unlike other leak detection methods (e.g. pressure-based), the proposed method does not require geological and reservoir flow models, which often carry significant sources of uncertainty; therefore, with our approach the leak can be detected with greater certainty. The method can be applied to detect when a leak begins by tracking a departure of the fluid production rate from the expected pattern. The method has been tuned to detect the effect of boundary conditions and fluid compressibility on leakage. To highlight the utility of this approach, we use our method to detect leaks for two scenarios. The first scenario simulates a fluid leak from the storage formation into an above-zone monitoring interval. The second scenario simulates intra-reservoir migration between two compartments. We illustrate this method for detecting fluid leakage in three different reservoirs with varying levels of geological and structural complexity.
The proposed leakage detection method has three novelties: i) it requires only readily available data (injection and production rates); ii) it accounts for fluid compressibility and boundary effects; and iii) in addition to detecting the time when a leak is activated and the volume of that leakage, it provides insight into the leak location and reservoir connectivity. We propose this as a complementary method that can be used alongside other, more expensive methods early in the injection process. This will allow an operator to conduct expensive surveys less often, because the proposed method can confirm cheaply and quickly, on a monthly basis, that there are no leaks.
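The rate-only detection principle can be caricatured as a material-balance residual: when cumulative injection minus cumulative production drifts above a tolerance, a leak is flagged at that time step. This is a crude stand-in of our own; the paper's capacitance-resistance model additionally handles compressibility and boundary effects:

```python
def detect_leak(inj, prod, threshold):
    """Flag a leak when the cumulative injected-minus-produced volume
    exceeds `threshold`. Returns the first offending time step, or None.
    (Crude material-balance sketch, not the capacitance-resistance model.)"""
    residual = 0.0
    for t, (qi, qp) in enumerate(zip(inj, prod)):
        residual += qi - qp
        if residual > threshold:
            return t
    return None

# synthetic rates: balanced until step 60, then 10 units/step start leaking
inj = [100.0] * 120
prod = [100.0] * 60 + [90.0] * 60
t_detect = detect_leak(inj, prod, threshold=25.0)
```

With a threshold of 25 units, the residual (10 units/step from step 60) trips the detector at step 62, shortly after the leak activates; a balanced system is never flagged.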
Attitude Determination Using a MEMS-Based Flight Information Measurement Unit
Ma, Der-Ming; Shiau, Jaw-Kuen; Wang, I.-Chiang; Lin, Yu-Heng
2012-01-01
Obtaining precise attitude information is essential for aircraft navigation and control. This paper presents the results of the attitude determination using an in-house designed low-cost MEMS-based flight information measurement unit. This study proposes a quaternion-based extended Kalman filter to integrate the traditional quaternion and gravitational force decomposition methods for attitude determination algorithm. The proposed extended Kalman filter utilizes the evolution of the four elements in the quaternion method for attitude determination as the dynamic model, with the four elements as the states of the filter. The attitude angles obtained from the gravity computations and from the electronic magnetic sensors are regarded as the measurement of the filter. The immeasurable gravity accelerations are deduced from the outputs of the three axes accelerometers, the relative accelerations, and the accelerations due to body rotation. The constraint of the four elements of the quaternion method is treated as a perfect measurement and is integrated into the filter computation. Approximations of the time-varying noise variances of the measured signals are discussed and presented with details through Taylor series expansions. The algorithm is intuitive, easy to implement, and reliable for long-term high dynamic maneuvers. Moreover, a set of flight test data is utilized to demonstrate the success and practicality of the proposed algorithm and the filter design. PMID:22368455
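The dynamic model referenced above is the quaternion kinematic equation q̇ = ½ q ⊗ (0, ω), with the unit-norm constraint re-imposed after each step. A minimal propagation sketch (the full EKF with magnetometer/accelerometer measurement updates is not reproduced):

```python
import math

def quat_mult(q, r):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def propagate(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q x (0, omega), then renormalize
    (the unit-norm constraint treated as a perfect measurement in the filter)."""
    dq = quat_mult(q, (0.0,) + tuple(omega))
    q = tuple(a + 0.5 * dt * b for a, b in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# rotate about z at pi/2 rad/s for 1 s -> 90 degree yaw
q = (1.0, 0.0, 0.0, 0.0)
dt = 0.001
for _ in range(1000):
    q = propagate(q, (0.0, 0.0, math.pi / 2), dt)
```

After one second the quaternion is very close to (cos 45°, 0, 0, sin 45°), i.e. a 90° rotation about z, confirming the kinematics the filter states evolve under.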
Wang, Ruijia; Chen, Jie; Wang, Xing; Sun, Bing
2017-01-09
Retransmission deception jamming seriously degrades Synthetic Aperture Radar (SAR) detection efficiency and can mislead SAR image interpretation by forming false targets. In order to suppress retransmission deception jamming, this paper proposes a novel multiple input and multiple output (MIMO) SAR structure, range direction MIMO SAR, whose multiple channel antennas are arranged perpendicular to the azimuth. First, based on the multiple channels of range direction MIMO SAR, the orthogonal frequency division multiplexing (OFDM) linear frequency modulation (LFM) signal was adopted as the transmission signal of each channel, which is defined as a sub-band signal. Each sub-band signal corresponds to a transmission channel. Then, all of the sub-band signals are modulated with random initial phases and transmitted concurrently. The signal form is therefore more complex and difficult to intercept. Next, the echoes of the sub-band signals are utilized to synthesize a wide band signal after preprocessing. The proposed method increases the signal-to-interference ratio and the peak amplitude ratio of the signal to resist retransmission deception jamming. Finally, well-focused SAR imagery is obtained using a conventional imaging method in which the retransmission deception jamming strength is degraded and defocused. Simulations demonstrated the effectiveness of the proposed method.
Scalable privacy-preserving data sharing methodology for genome-wide association studies.
Yu, Fei; Fienberg, Stephen E; Slavković, Aleksandra B; Uhler, Caroline
2014-08-01
The protection of privacy of individual-level information in genome-wide association study (GWAS) databases has been a major concern of researchers following the publication of "an attack" on GWAS data by Homer et al. (2008). Traditional statistical methods for confidentiality and privacy protection of statistical databases do not scale well to deal with GWAS data, especially in terms of guarantees regarding protection from linkage to external information. The more recent concept of differential privacy, introduced by the cryptographic community, is an approach that provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, Uhler et al. (2013) proposed new methods to release aggregate GWAS data without compromising an individual's privacy. We extend the methods developed in Uhler et al. (2013) for releasing differentially-private χ²-statistics by allowing for arbitrary number of cases and controls, and for releasing differentially-private allelic test statistics. We also provide a new interpretation by assuming the controls' data are known, which is a realistic assumption because some GWAS use publicly available data as controls. We assess the performance of the proposed methods through a risk-utility analysis on a real data set consisting of DNA samples collected by the Wellcome Trust Case Control Consortium and compare the methods with the differentially-private release mechanism proposed by Johnson and Shmatikov (2013). Copyright © 2014 Elsevier Inc. All rights reserved.
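The basic building block for releasing differentially-private statistics, as discussed above, is the Laplace mechanism: add noise with scale sensitivity/ε. A generic sketch (the statistic value 12.3 and sensitivity bound 1.0 are hypothetical placeholders; the actual sensitivity of a χ²-statistic must be derived as in the cited work):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def release(stat, sensitivity, epsilon, rng):
    """epsilon-differentially-private release of a statistic whose global
    sensitivity is `sensitivity` (the Laplace mechanism)."""
    return stat + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
chi2 = 12.3   # hypothetical chi-square test statistic for one SNP
noisy = [release(chi2, sensitivity=1.0, epsilon=1.0, rng=rng)
         for _ in range(20000)]
mean_release = sum(noisy) / len(noisy)
```

The noise is zero-mean, so repeated releases average back to the true statistic; the utility price is the per-release perturbation, which grows as ε shrinks.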
Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young
2018-06-01
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to a change of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through the evaluation of the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, an a priori estimate of the channel is obtained by block orthogonal matching pursuit; afterward, the estimated channel is utilized to calculate the linear minimum mean square error of the received pilots. Finally, block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.
NASA Astrophysics Data System (ADS)
Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming
2018-06-01
This paper presents a new integrated planning framework for effectively accommodating electric vehicles (EVs) in smart distribution systems (SDS). The proposed method incorporates various investment options available to the utility collectively, including distributed generation (DG), capacitors and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme, and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.
Equivalent radiation source of 3D package for electromagnetic characteristics analysis
NASA Astrophysics Data System (ADS)
Li, Jun; Wei, Xingchang; Shu, Yufei
2017-10-01
An equivalent radiation source method is proposed to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs) in this paper. The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and the differential evolution optimization algorithm is employed to extract the locations, orientations and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with the measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Nature Science Foundation of China (No. 61274110).
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in many modeling tools such as Petri nets (PN) has become challenging in recent years due to the large number of states in the automata models or the presence of uncontrollable events. The uncontrollable events give rise to forbidden states, which can be removed by employing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system is very difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is decomposed into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. The results of our proposed method on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Trust-aware recommendation for improving aggregate diversity
NASA Astrophysics Data System (ADS)
Liu, Haifeng; Bai, Xiaomei; Yang, Zhuo; Tolba, Amr; Xia, Feng
2015-10-01
Recommender systems are becoming increasingly important and prevalent because of their ability to alleviate information overload. In recent years, researchers have been paying increasing attention to aggregate diversity as a key metric beyond accuracy, because improving aggregate recommendation diversity may increase long tails and sales diversity. Trust is often used to improve recommendation accuracy. However, how to utilize trust to improve aggregate recommendation diversity is unexplored. In this paper, we focus on solving this problem and propose a novel trust-aware recommendation method that incorporates a time factor into the similarity computation. The rationale underlying the proposed method is that trustees with a later creation time of the trust relation can bring more diverse items to recommend to their trustors than trustees with an earlier creation time. Through experiments on a publicly available dataset, we demonstrate that the proposed method outperforms the baseline method in terms of aggregate diversity while maintaining almost the same recall.
Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan
2010-01-01
Texture features are among the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of chest CT images, we extract texture features by proposing an extension of the rotation invariant LBP (ELBP(riu4)) and the gradient orientation difference, so as to represent uniform patterns of brightness and structure in the image. The use of ELBP(riu4) and the gradient orientation difference allows us to extract rotation invariant texture features in multiple directions. Beyond this, we propose employing the integral image technique to speed up the texture feature computation of the spatial gray level dependence method (SGLDM). Copyright © 2010 Elsevier Ltd. All rights reserved.
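The integral image speed-up mentioned above works by precomputing a summed-area table so that any rectangular sum costs four lookups. A minimal sketch of that technique (independent of the SGLDM specifics):

```python
def integral_image(img):
    """S[r][c] = sum of img over the rectangle rows [0, r) x cols [0, c)."""
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row = 0
        for c in range(w):
            row += img[r][c]
            S[r + 1][c + 1] = S[r][c + 1] + row
    return S

def block_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] in O(1) via four table lookups."""
    return S[r1][c1] - S[r0][c1] - S[r1][c0] + S[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
S = integral_image(img)
```

Any window sum over the image is then constant-time regardless of window size, which is what makes repeated local texture statistics affordable.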
A Novel Evaluation Model for the Vehicle Navigation Device Market Using Hybrid MCDM Techniques
NASA Astrophysics Data System (ADS)
Lin, Chia-Li; Hsieh, Meng-Shu; Tzeng, Gwo-Hshiung
A development strategy for navigation devices (NDs) is also presented to initiate the product roadmap. Criteria for evaluation were constructed by reviewing papers, interviewing experts, and brainstorming. The ISM (interpretive structural modeling) method was used to construct the relationships between the criteria. Existing NDs were sampled to benchmark the gap between consumers' aspired/desired utilities and the utilities of existing/developing NDs. The VIKOR method was applied to rank the sampled NDs. This paper proposes the key criteria driving the purchase of a new ND and compares the behavior of consumers with various characteristics. These conclusions can serve as a reference for ND producers in improving existing functions or planning further utilities in the next-generation e-era ND.
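The VIKOR ranking step can be sketched in a few lines. This is a minimal illustration that treats every criterion as benefit-type with given weights; the actual criteria, weights, and aspired-level handling of the paper's evaluation model are assumptions here.

```python
import numpy as np

def vikor(X, w, v=0.5):
    """Minimal VIKOR sketch (all criteria benefit-type).

    X: alternatives x criteria matrix; w: criterion weights;
    returns the compromise index Q (lower is better)."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    rng = np.where(f_best == f_worst, 1.0, f_best - f_worst)  # avoid /0
    D = w * (f_best - X) / rng            # weighted regret per criterion
    S, R = D.sum(axis=1), D.max(axis=1)   # group utility, individual regret

    def scale(a):
        return (a - a.min()) / ((a.max() - a.min()) or 1.0)

    return v * scale(S) + (1 - v) * scale(R)
```

The best alternative gets Q = 0 and the worst Q = 1; intermediate alternatives fall in between.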
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from the blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by the low-quality sensors usually encountered in web cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye-friendly images compared with algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Clustered Multi-Task Learning for Automatic Radar Target Recognition
Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua
2017-01-01
Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share multi-task relationships for radar target recognition. To make full use of these relationships, the latent multi-task relationships in the projection space are also taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected space. In view of the nonlinear characteristics of radar targets, the proposed method is extended to a nonlinear kernel version and the corresponding nonlinear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the MSTAR SAR public database verify the superiority of the proposed method over several related algorithms. PMID:28953267
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through a novel region descriptor that couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage process, which allows the graph-based descriptor to be used in many scenarios and thus the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric, which determines the subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
Feature Grouping and Selection Over an Undirected Graph.
Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping
2012-01-01
High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered promising for improving regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which can introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise l∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves on the first by utilizing a non-convex function to reduce the estimation bias. The third is an extension of the second method that uses truncated l1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.
NASA Astrophysics Data System (ADS)
Chen, Qingcai; Wang, Mamin; Wang, Yuqin; Zhang, Lixin; Xue, Jian; Sun, Haoyao; Mu, Zhen
2018-07-01
Environmentally persistent free radicals (EPFRs) are present within atmospheric fine particles, and they are assumed to be a potential factor responsible for human pneumonia and lung cancer. This study presents a new method for the rapid quantification of EPFRs in atmospheric particles with a quartz sheet-based approach using electron paramagnetic resonance (EPR) spectroscopy. The three-dimensional distributions of the relative response factors in a cavity resonator were simulated and utilized for an accurate quantitative determination of EPFRs in samples. Comparisons between the proposed method and conventional quantitative methods were also performed to illustrate the advantages of the proposed method. The results suggest that the reproducibility and accuracy of the proposed method are superior to those of the quartz tube-based method. Although the solvent extraction method is capable of extracting specific EPFR species, the developed method can be used to determine the total EPFR content; moreover, the analysis process of the proposed approach is substantially quicker than that of the solvent extraction method. The proposed method has been applied in this study to determine the EPFRs in ambient PM2.5 samples collected over Xi'an, the results of which will be useful for extensive research on the sources, concentrations, and physicochemical characteristics of EPFRs in the atmosphere.
A Sector Capacity Assessment Method Based on Airspace Utilization Efficiency
NASA Astrophysics Data System (ADS)
Zhang, Jianping; Zhang, Ping; Li, Zhen; Zou, Xiang
2018-02-01
Sector capacity is one of the core factors affecting the safety and efficiency of the air traffic system. Most previous sector capacity assessment methods considered only the air traffic controller's (ATCO's) workload; such methods are limited in that they address only safety, and they are not accurate enough. In this paper, we employ the integrated quantitative index system proposed in one of our previous papers. We use principal component analysis (PCA) to identify the principal indicators and thereby calculate the airspace utilization efficiency. In addition, we use a series of fitting functions to test and define the correlation between the density of air traffic flow and the airspace utilization efficiency. The sector capacity is then determined as the traffic flow density corresponding to the maximum airspace utilization efficiency. We also use the same series of fitting functions to test the correlation between the density of air traffic flow and the ATCOs' workload. We examine our method with a large amount of empirical operating data from the Chengdu Control Center and obtain a reliable sector capacity value. Experimental results also show the superiority of our method over those that consider only the ATCO's workload, in terms of a better correlation between the airspace utilization efficiency and the traffic flow density.
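The PCA step can be sketched as follows: standardize the indicator matrix and take the leading principal component as a composite utilization-efficiency score. The indicator data here are synthetic; the actual indicator set comes from the authors' earlier work and is not reproduced.

```python
import numpy as np

def pca_index(Z):
    """Composite efficiency index sketch: first-principal-component scores
    of the standardized indicator matrix Z (samples x indicators)."""
    Z = np.asarray(Z, float)
    std = Z.std(axis=0)
    std[std == 0] = 1.0                       # guard constant indicators
    Zs = (Z - Z.mean(axis=0)) / std
    vals, vecs = np.linalg.eigh(np.cov(Zs, rowvar=False))  # ascending order
    pc1 = vecs[:, -1]                         # leading principal direction
    if pc1.sum() < 0:                         # fix sign for interpretability
        pc1 = -pc1
    return Zs @ pc1, vals[-1] / vals.sum()    # scores, explained-variance ratio
```

When the indicators are strongly correlated, the first component captures almost all the variance, which is what justifies collapsing them into one efficiency score.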
Link Prediction in Evolving Networks Based on Popularity of Nodes.
Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian
2017-08-02
Link prediction aims to uncover the underlying relationships behind networks, which can be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure-based methods ignore the temporal aspects of networks; limited by time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose the hypothesis that the ability of each node to attract links depends not only on its structural importance but also on its current popularity (activeness), since active nodes are much more likely to attract future links. A novel approach named the popularity-based structural perturbation method (PBSPM), together with a fast algorithm, is then proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Moreover, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes rather than edges between inactive nodes.
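The popularity idea can be illustrated with a toy score: a structural common-neighbours term scaled by each endpoint's recent activity. This is only a hedged illustration of the principle; PBSPM itself perturbs the network's spectral structure, which is not reproduced here.

```python
def popularity_score(adj, activity, x, y):
    """Toy link-prediction score: common neighbours of x and y, scaled by
    their recent activity (links gained in the latest snapshot).

    adj: dict node -> set of neighbours; activity: dict node -> int."""
    structural = len(adj[x] & adj[y])                  # static part
    popularity = (1 + activity.get(x, 0)) * (1 + activity.get(y, 0))
    return structural * popularity                     # active pairs rank higher
```

Candidate edges between recently active nodes receive inflated scores, mimicking the paper's observation that active nodes attract future links.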
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency, and reliable communication have always been hot topics for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effects of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find potentially erroneous bits; after that, an error bit corrector is employed to correct the wrong bits. To further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection, and error bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. Moreover, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
Classification of complex networks based on similarity of topological network features
NASA Astrophysics Data System (ADS)
Attar, Niousha; Aliakbary, Sadegh
2017-09-01
Over the past few decades, networks have been widely used to model real-world phenomena. Real-world networks exhibit nontrivial topological characteristics and therefore, many network models are proposed in the literature for generating graphs that are similar to real networks. Network models reproduce nontrivial properties such as long-tail degree distributions or high clustering coefficients. In this context, we encounter the problem of selecting the network model that best fits a given real-world network. The need for a model selection method reveals the network classification problem, in which a target-network is classified into one of the candidate network models. In this paper, we propose a novel network classification method which is independent of the network size and employs an alignment-free metric of network comparison. The proposed method is based on supervised machine learning algorithms and utilizes the topological similarities of networks for the classification task. The experiments show that the proposed method outperforms state-of-the-art methods with respect to classification accuracy, time efficiency, and robustness to noise.
Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization
Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun
2015-01-01
A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed on tactile data obtained at the tissue surface using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data from the mechanical properties of a tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. These results indicate that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan time compared with conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy that utilizes the smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably with two representative reconstruction approaches, image-space total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate the acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
NASA Astrophysics Data System (ADS)
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method for capturing the internal structures of the human body. However, bone segmentation in US images remains challenging because the images are strongly affected by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images, as an initial step toward three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is first extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the image using a first-phase-feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are performed on cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
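The quadratic-fitting hole-filling step might look like the following sketch, where undetected columns are marked NaN and filled from the fitted polynomial. The column/row conventions are assumptions, not the paper's exact implementation.

```python
import numpy as np

def fill_contour_holes(cols, rows):
    """Hole-filling sketch: fit y = a*x**2 + b*x + c to the detected contour
    pixels and predict the row for every column where detection failed
    (marked NaN in rows)."""
    cols = np.asarray(cols, float)
    rows = np.asarray(rows, float)
    ok = ~np.isnan(rows)
    a, b, c = np.polyfit(cols[ok], rows[ok], 2)   # quadratic coefficients
    filled = rows.copy()
    filled[~ok] = a * cols[~ok] ** 2 + b * cols[~ok] + c
    return filled
```

Because the fit uses all detected pixels, the filled values vary smoothly with their neighbours, which is what gives the contour its continuity.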
Fiber fault location utilizing traffic signal in optical network.
Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi
2013-10-07
We propose and experimentally demonstrate a method for fault location in optical communication networks. The method utilizes the traffic signal transmitted across the network as the probe signal and locates the fault by a correlation technique. Compared with conventional techniques, our method has a simple structure and low operating expenditure, because no additional devices such as a light source, modulator, or signal generator are used. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse-ranging techniques. Moreover, the signal extraction process improves the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and a detection range of over 23 km with a -8-dBm mean launched power in an optical network based on synchronous digital hierarchy protocols.
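The correlation-ranging principle can be sketched numerically: correlate the received echo with the transmitted traffic signal, locate the peak lag, and convert it to distance. The sampling rate and fibre group velocity below are illustrative values, not the paper's experimental parameters.

```python
import numpy as np

def fault_distance(probe, echo, fs, v_group=2e8):
    """Correlation-ranging sketch: the live traffic signal is the probe,
    the reflection from the fault is the echo; the lag of the correlation
    peak times the group velocity gives the fault distance.

    fs: sample rate in Hz; v_group: group velocity in m/s (~2e8 in fibre)."""
    corr = np.correlate(echo, probe, mode="full")
    lag = np.argmax(corr) - (len(probe) - 1)   # delay in samples
    return 0.5 * lag / fs * v_group            # one-way distance in metres
```

A 50-sample round-trip delay at 1 GS/s corresponds to 5 m of fibre at 2e8 m/s, once halved for the round trip.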
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham
This work presents an application of the isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction (AS) method, utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation (AM) method, utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation (A-Sum) method, utilizing the isosbestic point in derivative spectra. The three methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM), and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at that point, with no need for a complementary method. The obtained results were statistically compared with each other and with those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated per ICH guidelines, where accuracy, repeatability, inter-day precision, and robustness were found to be within acceptable limits. The results obtained from the proposed methods were statistically compared with those of official methods, and no significant difference was observed.
Method for Producing Launch/Landing Pads and Structures Project
NASA Technical Reports Server (NTRS)
Mueller, Robert P. (Compiler)
2015-01-01
Current plans for deep space exploration include building landing/launch pads capable of withstanding the rocket blast of spacecraft much larger than those of the Apollo days. The proposed concept will develop lightweight launch and landing pad materials from in-situ materials, utilizing regolith to produce controllably porous cast metallic foam brick/tile shapes. These shapes can be used to lay a landing/launch platform, as a construction material, or as more complex parts of mechanical assemblies.
An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jin; Nelson, Karl E.
Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in slope space utilizing the Hermite cubic spline. The new method is only quartic, yet it is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.
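For context, the standard monotone piecewise-cubic Hermite scheme with harmonic-mean slope limiting can be sketched as follows; it is only C1, whereas the paper's quartic construction with control points (not reproduced here) achieves C2.

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotone piecewise-cubic Hermite interpolation with harmonic-mean
    (Fritsch-Carlson style) slope limiting. Assumes strictly increasing x.
    A C1 baseline only, not the paper's C2 quartic method."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    d = np.diff(y) / h                          # secant slopes
    m = np.empty_like(y)
    m[0], m[-1] = d[0], d[-1]
    m[1:-1] = np.where(d[:-1] * d[1:] > 0,
                       2 * d[:-1] * d[1:] / (d[:-1] + d[1:]),  # harmonic mean
                       0.0)                     # flatten at local extrema
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[i]) / h[i]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h[i] * m[i] + h01 * y[i + 1] + h11 * h[i] * m[i + 1]
```

The harmonic-mean limiter keeps every nodal slope within the Fritsch-Carlson monotonicity region, so monotone data produce a monotone interpolant with no overshoot.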
Resonant power processors. II - Methods of control
NASA Technical Reports Server (NTRS)
Oruganti, R.; Lee, F. C.
1984-01-01
The nature of resonant converter control is discussed. Employing the state portrait, different control methods for the series resonant converter are identified and their performance is evaluated based on stability, response to control and load changes, and range of operation. A new control method, optimal-trajectory control, is proposed which, by utilizing the state trajectories as control laws, continuously monitors the energy level of the resonant tank. The method is shown to have superior control properties, especially under transient operation.
MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.
Mahmood, Muhammad Tariq
2014-12-01
In this letter, a shape from focus (SFF) method is proposed that utilizes guided image filtering to enhance the image focus volume efficiently. First, the image focus volume is computed using a conventional focus measure. Then each layer of the focus volume is filtered using guided filtering, with the all-in-focus image, obtainable from the initial focus volume, used as the guidance image. Finally, an improved depth map is obtained from the filtered focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted through several experiments using image sequences of simulated and real microscopic objects, and the comparative analysis demonstrates the effectiveness of the proposed method.
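The SFF pipeline can be sketched in a few lines: compute a focus measure per slice, smooth each focus layer, and take the argmax along the focal axis. A box filter stands in for the guided filter here, and periodic boundaries are assumed for brevity; neither is the letter's exact implementation.

```python
import numpy as np

def depth_from_focus(stack):
    """Shape-from-focus sketch: modified-Laplacian focus measure per slice,
    3x3 box smoothing (guided-filter stand-in), then argmax over slices.
    Uses wraparound boundaries via np.roll for simplicity."""
    vol = []
    for img in stack:
        img = np.asarray(img, float)
        lap = np.abs(4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                     - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        sm = sum(np.roll(np.roll(lap, dy, 0), dx, 1)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        vol.append(sm)
    return np.argmax(np.stack(vol), axis=0)   # index of sharpest slice
```

For a stack where only one slice contains high-frequency texture, every pixel's depth index lands on that slice.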
Camouflage target reconnaissance based on hyperspectral imaging technology
NASA Astrophysics Data System (ADS)
Hua, Wenshen; Guo, Tong; Liu, Xun
2015-08-01
Efficient camouflaged-target reconnaissance technology has a great influence on modern warfare. Hyperspectral images provide a large spectral range and high spectral resolution, which are invaluable for discriminating between camouflaged targets and backgrounds. Hyperspectral target detection and classification technologies are utilized to achieve single-class and multi-class camouflaged target reconnaissance, respectively. Constrained energy minimization (CEM), a widely used algorithm in hyperspectral target detection, is employed for single-class camouflaged target reconnaissance. Then, the support vector machine (SVM), a classification method, is applied for multi-class camouflaged target reconnaissance. Experiments have been conducted to demonstrate the efficiency of the proposed methods.
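The CEM detector has the standard closed form w = R^-1 d / (d^T R^-1 d); a minimal sketch is below. The spectra are synthetic, and the small diagonal loading is an assumption for numerical stability.

```python
import numpy as np

def cem_detector(X, d, eps=1e-6):
    """Constrained energy minimization sketch.

    X: pixels x bands matrix; d: target signature. The filter minimizes
    average output energy subject to w @ d == 1, so a pixel equal to d
    scores exactly 1."""
    X = np.asarray(X, float)
    R = X.T @ X / len(X) + eps * np.eye(X.shape[1])  # sample correlation, loaded
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)                        # enforce w @ d == 1
    return X @ w                                     # per-pixel detection score
```

Background pixels, which dominate R, are suppressed toward zero while the target signature always scores 1 by construction.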
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequency, and thereby the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations, and validated by a laboratory experiment.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic, high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm produces realistic-looking objects without any noteworthy increase in the computational cost.
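The Phong model the method integrates combines ambient, diffuse, and specular terms. A scalar-intensity sketch is below; the coefficients are illustrative, not the paper's rendering parameters.

```python
import numpy as np

def phong(n, l, v, ka=0.1, kd=0.7, ks=0.4, shin=16, ambient=1.0, light=1.0):
    """Per-point Phong intensity: ambient + diffuse + specular.

    n, l, v: unit surface normal, light direction, and view direction."""
    n, l, v = (np.asarray(a, float) for a in (n, l, v))
    diff = max(n @ l, 0.0)                    # Lambertian term
    r = 2 * (n @ l) * n - l                   # mirror reflection of l about n
    spec = max(r @ v, 0.0) ** shin if diff > 0 else 0.0
    return ka * ambient + light * (kd * diff + ks * spec)
```

With the light and viewer both along the normal, all three terms contribute fully; at grazing incidence only the ambient term remains.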
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
DOT National Transportation Integrated Search
1983-11-01
Author's abstract: A detailed re-analysis of available pedestrian accident data was utilized to define three sets of pedestrian safety public information and education (PI&E) messages. These messages were then produced and field tested. The objective...
Evaluation of Two PCR-Based Swine-Specific Fecal Source Tracking Assays (Poster)
Several PCR-based methods have been proposed to identify swine fecal pollution in environmental waters. However, the specificity and distribution of these targets have not been adequately assessed. Consequently, the utility of these assays in identifying swine fecal contamination...
Learning What to Want: Context-Sensitive Preference Learning
Srivastava, Nisheeth; Schrater, Paul
2015-01-01
We have developed a method for learning relative preferences from histories of choices made, without requiring an intermediate utility computation. Our method infers preferences that are rational in a psychological sense, where agent choices result from Bayesian inference of what to do from observable inputs. We further characterize conditions on choice histories wherein it is appropriate for modelers to describe relative preferences using ordinal utilities, and illustrate the importance of the influence of choice history by explaining all major categories of context effects using them. Our proposal clarifies the relationship between economic and psychological definitions of rationality and rationalizes several behaviors heretofore judged irrational by behavioral economists. PMID:26496645
Segmentation of oil spills in SAR images by using discriminant cuts
NASA Astrophysics Data System (ADS)
Ding, Xianwen; Zou, Xiaolin
2018-02-01
The discriminant cut is used to segment oil spills in synthetic aperture radar (SAR) images. The proposed approach is region-based and able to capture and utilize spatial information in SAR images. Real SAR images, i.e., ALOS-1 PALSAR and Sentinel-1 images, were collected and used to validate the accuracy of the proposed approach for oil spill segmentation. Its accuracy is higher than that of the fuzzy C-means classification method.
NASA Astrophysics Data System (ADS)
Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping
2017-07-01
This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in One-Dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.
Hybrid Intrusion Forecasting Framework for Early Warning System
NASA Astrophysics Data System (ADS)
Kim, Sehun; Shin, Seong-Jun; Kim, Hyunwoo; Kwon, Ki Hoon; Han, Younggoo
Recently, cyber attacks have become a serious hindrance to the stability of the Internet. These attacks exploit the interconnectivity of networks, propagate in an instant, and have become more sophisticated and evolutionary. Traditional Internet security systems such as firewalls, IDS, and IPS are limited in detecting recent cyber attacks in advance, as these systems respond to Internet attacks only after the attacks inflict serious damage. In this paper, we propose a hybrid intrusion forecasting framework for an early warning system. The proposed system utilizes three types of forecasting methods: time-series analysis, probabilistic modeling, and data mining. By combining these methods, it is possible to take advantage of each technique's strengths while overcoming its drawbacks. Experimental results show that the hybrid intrusion forecasting method outperforms each of the three individual forecasting methods.
Image dehazing based on non-local saturation
NASA Astrophysics Data System (ADS)
Wang, Linlin; Zhang, Qian; Yang, Deyun; Hou, Yingkun; He, Xiaoting
2018-04-01
In this paper, a method based on a non-local saturation algorithm is proposed to avoid block and halo effects in single-image dehazing with the dark channel prior. First, we convert the original image from RGB color space into HSV color space following the idea of the non-local method, weighting image saturation within a fixed-size window chosen according to image resolution. Second, we utilize the saturation to estimate the atmospheric light value and the transmission rate. The haze-free image is then obtained from the saturation and transmission through the atmospheric scattering model. Compared with existing methods, our method restores image color and enhances contrast. We evaluate the proposed method both quantitatively and qualitatively, and experiments show better visual quality with high efficiency.
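The restoration step, inverting the atmospheric scattering model with a saturation-derived transmission, can be sketched per pixel as below. This is our own simplified reading of the idea, not the paper's algorithm: the linear saturation-to-transmission mapping, the fixed `airlight` value, and the lower clamp `t_min` are illustrative assumptions, and the paper additionally works over non-local windows.

```python
import colorsys

def dehaze_pixel(rgb, airlight=0.95, t_min=0.3):
    # Hazy pixels tend to be desaturated, so HSV saturation serves as a
    # rough proxy for the transmission t (assumption for this sketch).
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    t = max(t_min, min(1.0, s))
    # Invert the atmospheric scattering model I = J*t + A*(1 - t).
    dehazed = ((c - airlight) / t + airlight for c in rgb)
    return tuple(max(0.0, min(1.0, c)) for c in dehazed)

# A desaturated (hazy) pixel gains contrast after restoration:
restored = dehaze_pixel((0.8, 0.78, 0.75))
```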
An atlas-based multimodal registration method for 2D images with discrepancy structures.
Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng
2018-06-04
An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. The atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: an atlas-based multimodal registration method schematic diagram.
NASA Technical Reports Server (NTRS)
Williams, L., Jr.
1978-01-01
The applicability of the tele-conference method of curriculum sharing as well as the sharing of scientific research results between universities and industrial organizations was evaluated in relation to other techniques and methods. Ten universities cooperated with NC A&T State University in an effort to increase the number of minority scientists and engineers in the USA via the utilization of the communication features of satellites. Research activities, experiments and studies in curriculum sharing are described as well as the techniques, interconnections and equipment utilized. Suggested methods and recommendations for a continuation of innovative applications of satellite technology in higher education at NC A&T State University are included.
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation: the spatial correlation utilizes the coding tree unit (CTU) splitting information, and the temporal correlation utilizes the CTU referenced by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
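The spatio-temporal depth bound described above can be sketched as a small heuristic. The one-level margin and the specific combination rule here are our own illustrative assumptions, not the paper's exact decision rule:

```python
def predict_max_depth(left_depth, above_depth, colocated_depth, max_codec_depth=3):
    # Bound the CU quad-tree search by the depths already chosen for the
    # spatial neighbours (left/above CTUs) and the temporally co-located
    # CTU, plus a one-level margin (assumed margin for this sketch).
    predicted = max(left_depth, above_depth, colocated_depth) + 1
    return min(predicted, max_codec_depth)   # never exceed the codec maximum
```

Skipping depths above this bound is what saves encoding time: partitions deeper than any correlated neighbour are rarely chosen by the full search.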
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent nature of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the passing effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is utilized as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, and three benchmark wind speed prediction methods are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
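The time-dependent mapping strategy can be illustrated with a toy regressor: the corrected speed at slot t is learned from the NWP forecast at t together with the measured speed at t-1. A plain gradient-descent linear model stands in for the paper's SVM (any regressor fits the strategy), and the numbers below are invented toy data, not measurements from the paper:

```python
def fit_linear(X, y, lr=0.001, epochs=2000):
    # Plain stochastic-gradient linear regression (stand-in for the SVM).
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = b + sum(wj * xj for wj, xj in zip(w, xi)) - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# Feature vector for slot t: [NWP forecast at t, measured speed at t-1];
# the second feature carries the "passing effect" of the previous slot.
nwp      = [5.0, 6.0, 7.0, 6.5, 5.5, 6.2, 7.1, 6.8]   # toy values (m/s)
measured = [4.0, 5.2, 6.1, 5.9, 4.8, 5.4, 6.3, 6.0]
X = [[nwp[t], measured[t - 1]] for t in range(1, len(nwp))]
w, b = fit_linear(X, measured[1:])
```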
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures. Much research has dealt with distributed architectures for planning and control of manufacturing systems; however, human operators have not yet been considered as autonomous components of distributed manufacturing systems. In this research, a real-time scheduling method is proposed to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. As a result, the shapes of the segmented regions are not limited to squares or rectangles, allowing them to better match the boundaries between moving objects; thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shapes of the surrounding macroblocks can reduce the overhead of describing shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques, such as mode decision using rate-distortion optimization, can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, as in the conventional coding methods. The proposed method was implemented in an H.264-based P-picture codec, and a bit rate improvement of 5% over H.264 was confirmed.
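Dividing a block into two regions by an arbitrary line segment can be sketched with a cross-product sign test; a minimal illustration of the geometry, with the convention that pixels on the line join region 0 assumed by us:

```python
def slice_block(size, p0, p1):
    # Assign each pixel to one of two regions by the sign of the 2D cross
    # product with the line through p0 and p1, so the split is not limited
    # to square or rectangular sub-partitions.
    (x0, y0), (x1, y1) = p0, p1
    region = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            cross = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
            region[y][x] = 0 if cross >= 0 else 1
    return region

# A diagonal split of an 8x8 "macroblock" into two sliced blocks:
mask = slice_block(8, (0, 0), (7, 7))
```

Each region would then carry its own motion vector, while transform and quantization still run on the whole macroblock.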
Fast Noncircular 2D-DOA Estimation for Rectangular Planar Array
Xu, Lingyun; Wen, Fangqing
2017-01-01
A novel scheme is proposed for direction finding with uniform rectangular planar array. First, the characteristics of noncircular signals and Euler’s formula are exploited to construct a new real-valued rectangular array data. Then, the rotational invariance relations for real-valued signal space are depicted in a new way. Finally the real-valued propagator method is utilized to estimate the pairing two-dimensional direction of arrival (2D-DOA). The proposed algorithm provides better angle estimation performance and can discern more sources than the 2D propagator method. At the same time, it has very close angle estimation performance to the noncircular propagator method (NC-PM) with reduced computational complexity. PMID:28417926
NASA Astrophysics Data System (ADS)
Su, Jinlong; Tian, Yan; Hu, Fei; Gui, Liangqi; Cheng, Yayun; Peng, Xiaohui
2017-10-01
The dielectric constant plays an important role in describing the properties of matter. This paper proposes the concept of the mixed dielectric constant (MDC) in passive microwave radiometric measurement. In addition, an MDC inversion method is proposed, in which the Ratio of Angle-Polarization Difference (RAPD) is utilized. The MDCs of several materials are investigated using RAPD. Brightness temperatures (TBs) calculated from the MDC and from the original dielectric constant are compared, and random errors are added to the simulation to test the robustness of the algorithm. Keywords: passive detection, microwave/millimeter, radiometric measurement, ratio of angle-polarization difference (RAPD), mixed dielectric constant (MDC), brightness temperatures, remote sensing, target recognition.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough transform. This feature extraction has many applications, among which is image registration.
NASA Astrophysics Data System (ADS)
Salem, Hesham; Mohamed, Dalia
2015-04-01
Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS), and mean centering (MC). The other three methods utilize the isoabsorptive point, either at zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or at the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision, and linearity ranges of the proposed methods were determined, and their selectivity was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed that there is no significant difference between the proposed methods and the reported methods regarding both accuracy and precision.
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By utilizing the technology of compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectrum efficiency. In this correspondence, a channel estimation and pilot design scheme is explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design algorithm with superimposed pilots and the BSAMP algorithm provide better channel estimation than existing methods.
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found, in differentially private FSM, the amount of required noise is proportionate to the number of candidate sequences. If we could effectively reduce the number of unpromising candidate sequences, the utility and privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.
Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue
2018-02-01
Generating chaotic maps with the dynamics expected by users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computational complexity, and their outputs demonstrate strong randomness and complex ergodicity.
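The relation the construction builds on, that the LEs of a Cat map equal the logs of the moduli of its Cat matrix's eigenvalues, can be checked directly in the classic 2D case. This sketch covers only the 2D matrix [[1, a], [b, ab+1]], not the paper's n-D construction:

```python
import math

def cat_map_les_2d(a=1, b=1):
    # The 2D Cat matrix C = [[1, a], [b, a*b + 1]] has det 1; the LEs of
    # x' = C x (mod 1) are the logs of the moduli of C's eigenvalues.
    tr = 1 + a * b + 1
    det = 1 * (a * b + 1) - a * b          # always 1 (area-preserving)
    disc = math.sqrt(tr * tr - 4 * det)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return math.log(abs(lam1)), math.log(abs(lam2))

le_pos, le_neg = cat_map_les_2d()          # Arnold's cat map: a = b = 1
```

Since the determinant is 1, the LEs sum to zero, so the 2D map always has exactly one positive LE; obtaining any desired number of positive LEs is what requires the paper's higher-dimensional similarity-transformation construction.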
Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy
Yu, Bing; Liu, Dongdong; Zhang, Tianhong
2011-01-01
Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, and this need can't be satisfied on some occasions. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is developed based on the results of a numerically simulated example. Experiments on this method are then carried out on a real micro gas turbine engine, in which four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734
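A wavelet energy entropy indicator of the kind described can be sketched with a plain Haar decomposition. The exact IWEE/IWSE definitions are not reproduced here; this sketch simply takes the Shannon entropy of the relative energies of the detail bands, which is the standard wavelet-energy-entropy construction the paper builds on:

```python
import math

def haar_dwt(signal):
    # One Haar level: pairwise normalized sums (approximation) and
    # differences (detail).
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_entropy(signal, levels=3):
    # Shannon entropy of the relative detail-band energies: energy spread
    # across many bands (a disturbed sensor signal) gives high entropy,
    # a single dominant band gives low entropy.
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    total = sum(energies) or 1.0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

A sensor fault that injects energy into unusual frequency bands shifts this entropy away from the healthy baseline, which is what the classifier keys on.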
Control system and method for a universal power conditioning system
Lai, Jih-Sheng; Park, Sung Yeul; Chen, Chien-Liang
2014-09-02
A new current-loop control method is proposed for a single-phase grid-tie power conditioning system that can be used in either a standalone or a grid-tie mode. This type of inverter utilizes an inductor-capacitor-inductor (LCL) filter as the interface between the inverter and the utility grid. The first inductor-capacitor (LC) set can be used in the standalone mode, and the complete LCL can be used in the grid-tie mode. A new admittance compensation technique is proposed for the controller design to avoid a low stability margin while maintaining sufficient gain at the fundamental frequency. The proposed current-loop controller and admittance compensation technique have been simulated and tested. Simulation results indicate that without the admittance path compensation, the current loop controller output duty cycle is largely offset by an undesired admittance path; at the initial simulation cycle, the power flow may be erratically fed back to the inverter, causing catastrophic failure. With admittance path compensation, the output power shows a steady-state offset that matches the design value. Experimental results show that the inverter is capable of both standalone and grid-tie connection modes using the LCL filter configuration.
Towards higher stability of resonant absorption measurements in pulsed plasmas.
Britun, Nikolay; Michiels, Matthieu; Snyders, Rony
2015-12-01
Possible ways to increase the reliability of time-resolved particle density measurements in pulsed gaseous discharges using resonant absorption spectroscopy are proposed. A special synchronization, called "dynamic source triggering," between a gated detector and two pulsed discharges, one representing the discharge of interest and another being used as a reference source, is developed. An internal digital delay generator in the intensified charge coupled device camera, used at the same time as a detector, is utilized for this purpose. According to the proposed scheme, the light pulses from the reference source follow the gates of detector, passing through the discharge of interest only when necessary. This allows for the utilization of short-pulse plasmas as reference sources, which is critical for time-resolved absorption analysis of strongly emitting pulsed discharges. In addition to dynamic source triggering, the reliability of absorption measurements can be further increased using simultaneous detection of spectra relevant for absorption method, which is also demonstrated in this work. The proposed methods are illustrated by the time-resolved measurements of the metal atom density in a high-power impulse magnetron sputtering (HiPIMS) discharge, using either a hollow cathode lamp or another HiPIMS discharge as a pulsed reference source.
A Drive Method for Small Inductance PM Motor Under No-Load Condition
NASA Astrophysics Data System (ADS)
Tanaka, Daisuke; Ohishi, Kiyoshi
The harmonic wave of the exciting current of the motor is generated by the pulse-width-modulated voltage of the inverter. Motors with low impedance generate more harmonics and incur larger iron losses. This paper describes an implementation of drive control for a small-inductance permanent-magnet motor. A comparative experiment has been carried out with conventional methods, and the utility of the proposed method has been verified.
NASA Astrophysics Data System (ADS)
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel
2013-06-01
To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
NASA Astrophysics Data System (ADS)
Tada, Hiroshi; Miyatake, Ichiro; Mouri, Junji; Ajiki, Norihiko; Fueta, Toshiharu
In Japan, various approaches have been taken to ensure the quality of public works and to support the procurement regime of governmental agencies by utilizing external resources, including the procurement support service and the construction management (CM) method. Although these measures (hereinafter referred to as external support measures) have been discussed, and follow-up surveys have shown their positive effects, the surveys only address the overall effects of the measures as a whole: the effect of each task item has not been examined, and the extent to which the measures met clients' expectations is unknown. However, effective future use of external support measures requires knowing the purpose for which they were introduced, the effect expected of each task item, and the extent to which that expectation was fulfilled. Furthermore, it is important to clarify not only the effect relative to the client's expectation (performance) but also the public benefit of the measure (value improvement). From this point of view, there is no established method to assess the effect of a client's measures to utilize external resources. Against this background, this study takes the CM method as an example of an external support measure, proposes a method to measure and evaluate the effect of each task item, and suggests future issues and possible responses, with the aim of contributing to the promotion, improvement, and proper implementation of external support measures.
An improved AE detection method of rail defect based on multi-level ANC with VSS-LMS
NASA Astrophysics Data System (ADS)
Zhang, Xin; Cui, Yiming; Wang, Yan; Sun, Mingjian; Hu, Hengshan
2018-01-01
In order to ensure the safety and reliability of railway systems, the Acoustic Emission (AE) method is employed to investigate rail defect detection. However, little attention has been paid to defect detection at high speed, especially to noise interference suppression. Based on AE technology, this paper presents an improved rail defect detection method using multi-level ANC with VSS-LMS. Multi-level noise cancellation based on SANC and ANC is utilized to eliminate complex noises at high speed, and a tongue-shaped curve with an index adjustment factor is proposed to enhance the performance of the variable step-size algorithm. Defect signals and reference signals are acquired on a rail-wheel test rig, and the features of noise signals and defect signals are analyzed for effective detection. The effectiveness of the proposed method is demonstrated by comparison with the previous study, and different filter lengths are investigated to obtain better noise suppression performance. Meanwhile, the detection ability of the proposed method is verified at the top speed of the test rig. The results clearly illustrate that the proposed method is effective in detecting rail defects at high speed, especially with respect to noise interference suppression.
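Adaptive noise cancellation with a variable step size can be sketched as below. The paper's tongue-shaped step-size curve is not reproduced; the classic Kwong-Johnston style update mu <- alpha*mu + gamma*e^2 is used as a stand-in, and the rig signals are replaced by synthetic data (a weak tone buried in broadband noise, with the noise available as the reference channel):

```python
import math, random

def vss_lms(desired, reference, order=4, mu0=0.05, alpha=0.97, gamma=4e-4):
    # ANC: an FIR filter on the reference noise tracks the noise component
    # of `desired`; the error e is the cleaned signal. The step size mu
    # shrinks as the error shrinks (variable step-size behaviour).
    w = [0.0] * order
    mu = mu0
    cleaned = []
    for n in range(order, len(desired)):
        x = reference[n - order + 1:n + 1]             # reference taps
        y = sum(wi * xi for wi, xi in zip(w, x))       # noise estimate
        e = desired[n] - y                             # cleaned sample
        mu = max(1e-4, min(0.1, alpha * mu + gamma * e * e))
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        cleaned.append(e)
    return cleaned

random.seed(1)
noise = [random.uniform(-1, 1) for _ in range(2000)]
clean = [0.2 * math.sin(0.05 * n) for n in range(2000)]
cleaned = vss_lms([c + nz for c, nz in zip(clean, noise)], noise)
```

A large step size gives fast initial convergence while the error is big; as cancellation improves, mu decays, reducing steady-state misadjustment. The tongue-shaped curve refines this trade-off.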
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, 40% of the sampling time can be saved by acquiring nearly one-third of the original sampling points. Several experiments were carried out to validate the performance of the CS-based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrate that the method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
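The first step, turning a 1-D vibration signal into a recurrence plot, is straightforward to sketch. The embedding dimension, delay, and distance threshold below are illustrative choices, not the paper's settings:

```python
import math

def recurrence_plot(signal, dim=3, delay=2, eps=0.3):
    """Binary recurrence plot of a 1-D signal: time-delay embed the signal
    into `dim`-dimensional state vectors, then mark every pair of states
    whose Euclidean distance falls below the threshold eps."""
    n_vec = len(signal) - (dim - 1) * delay
    states = [[signal[i + k * delay] for k in range(dim)] for i in range(n_vec)]
    rp = [[1 if math.dist(states[i], states[j]) < eps else 0
           for j in range(n_vec)] for i in range(n_vec)]
    return rp

# a periodic signal recurs near multiples of its period (~31 samples here)
sig = [math.sin(0.2 * t) for t in range(100)]
rp = recurrence_plot(sig)
diag_ok = all(rp[i][i] == 1 for i in range(len(rp)))  # self-distance is 0
```

The resulting binary matrix is the two-dimensional image from which texture descriptors such as SURF can then be extracted.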
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic, but parametric solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly utilize merely pairwise information. By the use of perceptual grouping laws, our method conducts segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to a wider range of applications. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
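A minimal sketch of the voxel side of such a pipeline: hash points into a sparse grid, then cluster occupied voxels by connectivity. Flood fill over the 6-neighborhood stands in for the paper's graph-based clustering with perceptual grouping laws, which it does not reproduce:

```python
from collections import deque

def voxelize(points, size):
    """Hash 3-D points into a sparse voxel grid keyed by integer indices."""
    grid = {}
    for p in points:
        key = tuple(int(c // size) for c in p)
        grid.setdefault(key, []).append(p)
    return grid

def segment(grid):
    """Group occupied voxels into clusters via 6-connected flood fill,
    a bare-bones stand-in for graph-based clustering of (super)voxels."""
    seen, clusters = set(), []
    for start in grid:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                      (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
                if nb in grid and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(comp)
    return clusters

# two well-separated point blobs should yield two clusters
pts = [(x * 0.1, 0.0, 0.0) for x in range(10)] + \
      [(5.0 + x * 0.1, 0.0, 0.0) for x in range(10)]
grid = voxelize(pts, size=0.5)
clusters = segment(grid)
```

Working on voxels rather than raw points is what buys the robustness to noise and uneven density: each voxel aggregates its points before any clustering decision is made.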
Evaluating Research Administration: Methods and Utility
ERIC Educational Resources Information Center
Marina, Sarah; Davis-Hamilton, Zoya; Charmanski, Kara E.
2015-01-01
Three studies were jointly conducted by the Office of Research Administration and Office of Proposal Development at Tufts University to evaluate the services within each respective office. The studies featured assessments that used, respectively, (1) quantitative metrics; (2) a quantitative satisfaction survey with limited qualitative questions;…
Synthesis of actual knowledge on machine-tool monitoring methods and equipment
NASA Astrophysics Data System (ADS)
Tanguy, J. C.
1988-06-01
Problems connected with the automatic supervision of production were studied. Many different automatic control devices are now able to identify defects in tools, but the solutions proposed to determine the optimal limits of tool utilization are not satisfactory.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Concurrent tasks, such as those found in disaster rapid response, are a typical characteristic of remote sensing applications. Existing composition approaches for geographical information processing service chains search for an optimisation solution in what can be deemed a "selfish" way, which leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to assure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utility of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility across all tasks.
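The best-response iteration can be illustrated on a toy two-task, two-service congestion game (all costs here are hypothetical). Sequentially updating each task's strategy converges to a Nash equilibrium where neither task can reduce its cost by deviating:

```python
def cost(choice_self, choice_other, base=(1.0, 1.2), congestion=2.0):
    """Latency of the chosen service: its base delay plus a congestion
    penalty when both tasks pick the same service instance."""
    c = base[choice_self]
    if choice_self == choice_other:
        c += congestion
    return c

def best_response(choice_other):
    """Strategy minimizing a task's own cost given the other's choice."""
    return min((0, 1), key=lambda s: cost(s, choice_other))

# both tasks start "selfishly" on the nominally faster service 0
choices = [0, 0]
for _ in range(10):  # sequential (Gauss-Seidel) best-response updates
    updated = False
    for i in (0, 1):
        br = best_response(choices[1 - i])
        if br != choices[i]:
            choices[i] = br
            updated = True
    if not updated:  # fixed point reached: a Nash equilibrium
        break
```

At the fixed point the tasks split across the two services: congestion makes sharing the faster service worse than taking the slower one alone, which is exactly the conflict the selfish composition ignores.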
An EEG-based functional connectivity measure for automatic detection of alcohol use disorder.
Mumtaz, Wajid; Saad, Mohamad Naufal B Mohamad; Kamel, Nidal; Ali, Syed Saad Azhar; Malik, Aamir Saeed
2018-01-01
Abnormal alcohol consumption can cause toxicity and alter the human brain's structure and function, a condition termed alcohol use disorder (AUD). Unfortunately, the conventional screening methods for AUD patients are subjective and manual. Hence, objective methods are needed to perform automatic screening of AUD patients. Electroencephalographic (EEG) data have been utilized to study the differences in brain signals between alcoholics and healthy controls, which could be further developed into an automatic screening tool for alcoholics. In this work, resting-state EEG-derived features were utilized as input data to the proposed feature selection and classification method. The aim was to perform automatic classification of AUD patients and healthy controls. The validation of the proposed method involved real EEG data acquired from 30 AUD patients and 30 age-matched healthy controls. The resting-state EEG-derived features, such as synchronization likelihood (SL), were computed over 19 scalp locations, resulting in 513 features. Furthermore, the features were rank-ordered to select the most discriminant ones, using a rank-based feature selection method according to a criterion based on the receiver operating characteristic (ROC). Consequently, a reduced set of the most discriminant features was identified and utilized further during the classification of AUD patients and healthy controls. In this study, three different classification models, Support Vector Machine (SVM), Naïve Bayes (NB), and Logistic Regression (LR), were used. The study resulted in SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95%, and f-measure = 0.97; LR classification accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6%, and f-measure = 0.90; NB classification accuracy = 93.6%, sensitivity = 100%, specificity = 87.9%, and f-measure = 0.95. The SL features could be utilized as objective markers to screen AUD patients and healthy controls. Copyright © 2017 Elsevier B.V. All rights reserved.
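Rank-based feature selection against an ROC criterion can be sketched as follows. The AUC is computed with the rank-sum identity, and the two-feature synthetic data are purely illustrative, not the EEG features of the study:

```python
import random

def auc(feature_vals, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a randomly drawn positive sample scores higher
    than a randomly drawn negative one."""
    pos = [v for v, y in zip(feature_vals, labels) if y == 1]
    neg = [v for v, y in zip(feature_vals, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [1] * 30 + [0] * 30
# one discriminative feature (class means differ) and one pure-noise feature
informative = [random.gauss(1.0 if y else 0.0, 0.5) for y in labels]
noise = [random.gauss(0.0, 1.0) for _ in labels]
features = {"informative": informative, "noise": noise}
# rank features by how far their AUC deviates from chance (0.5)
ranked = sorted(features, key=lambda k: abs(auc(features[k], labels) - 0.5),
                reverse=True)
```

The top-ranked features under this criterion would then feed a downstream classifier such as an SVM.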
Sabar, Nasser R; Ayob, Masri; Kendall, Graham; Qu, Rong
2015-02-01
Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely, the high-level strategy (heuristic selection mechanism and acceptance criterion) and the low-level heuristics (a set of problem-specific heuristics). Due to the different landscape structures of different problem instances, the high-level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high-level strategy for a hyper-heuristic framework. The proposed strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to apply at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing), are used to demonstrate the generality of the proposed framework. Compared with state-of-the-art hyper-heuristics and other bespoke methods, empirical results demonstrate that the proposed framework generalizes well across both domains. We obtain competitive, if not better, results when compared to the best known results obtained by other methods presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite. We again demonstrate the generality of our approach when we compare against other methods that have utilized the same six benchmark datasets from this test suite.
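The bandit-based heuristic selection idea can be illustrated with a plain UCB1 bandit; this is the textbook algorithm, not the paper's dynamic multiarmed bandit-extreme value-based reward mechanism, and the per-call reward probabilities are hypothetical:

```python
import math
import random

class BanditSelector:
    """UCB1 selection of low-level heuristics: balance the average observed
    reward (exploitation) against how rarely each heuristic has been tried
    (exploration)."""
    def __init__(self, n_heuristics):
        self.counts = [0] * n_heuristics
        self.values = [0.0] * n_heuristics
        self.t = 0

    def select(self):
        self.t += 1
        for h, c in enumerate(self.counts):
            if c == 0:
                return h          # try every heuristic once first
        ucb = [v / c + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=lambda h: ucb[h])

    def update(self, h, reward):
        self.counts[h] += 1
        self.values[h] += reward

random.seed(2)
# three hypothetical heuristics with different average improvement per call
true_reward = [0.2, 0.5, 0.8]
sel = BanditSelector(3)
for _ in range(500):
    h = sel.select()
    sel.update(h, random.random() < true_reward[h])  # Bernoulli reward
```

After a few hundred iterations the selector concentrates its calls on the heuristic that most often improves the incumbent solution, while still occasionally probing the others.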
Ko, Nak Yong; Kuc, Tae-Yong
2015-01-01
This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to the beacons measured by ultrasonic sensors and the range from the robot to the walls surrounding the robot measured by a laser range finder (LRF). For the fusion, the unscented Kalman filter (UKF) is utilized. Because finding the Jacobian matrix is not feasible for range measurement using an LRF, UKF has an advantage in this situation over the extended KF. The locations of the beacons and range data from the beacons are available, whereas the correspondence of the range data to the beacon is not given. Therefore, the proposed method also deals with the problem of data association to determine which beacon corresponds to the given range data. The proposed approach is evaluated using different sets of design parameter values and is compared with the method that uses only an LRF or ultrasonic beacons. Comparative analysis shows that even though ultrasonic beacons are sparsely populated, have a large error and have a slow update rate, they improve the localization performance when fused with the LRF measurement. In addition, proper adjustment of the UKF design parameters is crucial for full utilization of the UKF approach for sensor fusion. This study contributes to the derivation of a UKF-based design methodology to fuse two exteroceptive measurements that are complementary to each other in localization. PMID:25970259
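The advantage of the UKF's derivative-free propagation can be seen in a one-dimensional unscented transform, sketched here with the classic kappa parameterization. For a quadratic nonlinearity the sigma points reproduce the exact mean, which a first-order EKF-style linearization (y ≈ μ²) would miss:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """1-D unscented transform: propagate deterministically chosen sigma
    points through a nonlinearity instead of linearizing it, then recover
    the transformed mean and variance from weighted sample statistics."""
    n = 1  # state dimension
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * 2
    ys = [f(x) for x in sigma]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# f(x) = x^2 with x ~ N(1, 0.04): exact E[f(x)] = mu^2 + var = 1.04,
# whereas linearization about the mean would report 1.0
y_mean, y_var = unscented_transform(1.0, 0.04, lambda x: x * x)
```

This is why the UKF suits the LRF measurement model here: no Jacobian of the range-to-wall function is ever needed, only evaluations of the measurement function at the sigma points.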
Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J Valentine, Stephen
2017-05-01
Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher-energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers, including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen-deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here, the need for increased conformer diversity and accurate CCS calculation is demonstrated, and the advanced methods are discussed.
Orthogonal fast spherical Bessel transform on uniform grid
NASA Astrophysics Data System (ADS)
Serov, Vladislav V.
2017-07-01
We propose an algorithm for the orthogonal fast discrete spherical Bessel transform on a uniform grid. Our approach is based upon factorizing the spherical Bessel transform into two subsequent orthogonal transforms, namely the fast Fourier transform and an orthogonal transform founded on the derivatives of the discrete Legendre orthogonal polynomials. The utility of the method is illustrated by its implementation for the problem of a two-atomic molecule in a time-dependent external field simulating the one utilized in the attosecond streaking technique.
HEp-2 cell image classification method based on very deep convolutional networks with small datasets
NASA Astrophysics Data System (ADS)
Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping
2017-07-01
Human Epithelial-2 (HEp-2) cell image staining pattern classification has been widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the indirect immunofluorescence (IIF) protocol. Because the manual test is time consuming, subjective, and labor intensive, image-based computer-aided diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Besides, the scale of the available benchmark datasets is small, which is not well suited to deep learning methods. This issue directly influences the accuracy of cell classification, even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results over two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The path and energy information of these photons can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons denotes the attenuation of the total scattered γ-photons along their moving path. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments are conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed scattering correction method can efficiently correct scattered γ-photons and improve the test accuracy.
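The link between the energy window and the admissible scattering angle follows from the Compton formula E' = E₀ / (1 + (E₀/mₑc²)(1 − cos θ)). A small sketch, where the 450 keV window bound is an illustrative choice rather than the study's setting:

```python
import math

M_E = 511.0  # electron rest energy in keV; also the annihilation photon energy

def scattered_energy(theta_deg, e0=511.0):
    """Compton formula: energy of a photon after scattering through theta."""
    theta = math.radians(theta_deg)
    return e0 / (1.0 + (e0 / M_E) * (1.0 - math.cos(theta)))

def max_scatter_angle(e_min, e0=511.0):
    """Largest scattering angle still accepted by an energy window whose
    lower bound is e_min: photons that lose more energy are rejected.
    Obtained by inverting the Compton formula for cos(theta)."""
    cos_t = 1.0 - (M_E / e0) * (e0 / e_min - 1.0)
    return math.degrees(math.acos(cos_t))

# a 450-511 keV window accepts only photons scattered by roughly 30 degrees
# or less; tightening e_min shrinks the admissible scattering cone
angle = max_scatter_angle(450.0)
```

This angular bound is what restricts the set of geometrically possible scattering points on each detected line of response.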
Three optical methods for remotely measuring aerosol size distributions.
NASA Technical Reports Server (NTRS)
Reagan, J. A.; Herman, B. M.
1971-01-01
Three optical probing methods for remotely measuring atmospheric aerosol size distributions are discussed and contrasted. The particular detection methods which are considered make use of monostatic lidar (laser radar), bistatic lidar, and solar radiometer sensing techniques. The theory of each of these measurement techniques is discussed briefly, and the necessary constraints which must be applied to obtain aerosol size distribution information from such measurements are pointed out. Theoretical and/or experimental results are also presented which demonstrate the utility of the three proposed probing methods.
Design Optimization of Irregular Cellular Structure for Additive Manufacturing
NASA Astrophysics Data System (ADS)
Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao
2017-09-01
Irregular cellular structures have great potential in the lightweight design field. However, research on optimizing irregular cellular structures has not yet been reported, owing to the difficulties in their modeling technology. Based on variable-density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between cell structure and relative-density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. The simulation and experimental results are similar for the irregular cellular structure, indicating that the maximum deformation obtained using the modified Solid Isotropic Microstructures with Penalization (SIMP) approach is 5.4×10^-5 mm lower than that using the standard SIMP approach under the same external load. The proposed research provides guidance for the design of other irregular cellular structures.
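The density penalization at the heart of SIMP, and one plausible density-to-diameter mapping, can be sketched as follows. The square-root mapping is a hypothetical illustration of the density-to-cell-size step, not the paper's actual relationship:

```python
import math

def simp_modulus(rho, p=3.0, e0=1.0, e_min=1e-9):
    """Modified SIMP interpolation: Young's modulus as a function of the
    element's relative density rho in [0, 1].  The power p > 1 penalizes
    intermediate densities, pushing the optimizer toward solid (1) or
    void (0) elements; e_min keeps the stiffness matrix non-singular."""
    return e_min + rho ** p * (e0 - e_min)

def cell_diameter(rho, d_max=2.0):
    """Hypothetical density-to-diameter mapping: choose a cell diameter
    whose solid area fraction matches rho (one plausible choice; the
    paper's actual mapping is not reproduced here)."""
    return d_max * math.sqrt(rho)

# intermediate densities are disproportionately "expensive" in stiffness:
half = simp_modulus(0.5)   # 0.5^3 = 0.125 of full stiffness, not 0.5
```

Because half-density material delivers only an eighth of the stiffness, the optimizer is driven toward a clear solid/void layout, which the mapping then converts into per-cell diameters for fabrication.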
Composite marginal quantile regression analysis for longitudinal adolescent body mass index data.
Yang, Chi-Chuan; Chen, Yi-Hau; Chang, Hsing-Yi
2017-09-20
Childhood and adolescent overweight or obesity, which may be quantified through the body mass index (BMI), is strongly associated with adult obesity and other health problems. Motivated by the Child and Adolescent Behaviors in Long-term Evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis of longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72:74-84) to longitudinal data, accounting suitably for the correlation structure in longitudinal observations. A goodness-of-fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than an analysis that does not take correlation into account and an analysis performing separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.
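The building block of any composite quantile objective is the pinball loss, summed over several quantile levels fitted at once. A minimal sketch of both pieces (the data values are illustrative):

```python
def pinball_loss(residual, tau):
    """Quantile ("pinball") loss: an asymmetric absolute error whose
    minimizer over a sample is the tau-th empirical quantile."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def composite_loss(y, pred_by_tau):
    """Composite quantile objective: sum pinball losses over several
    quantile levels fitted simultaneously, the idea behind composite
    (marginal) quantile regression."""
    total = 0.0
    for tau, preds in pred_by_tau.items():
        total += sum(pinball_loss(yi - pi, tau) for yi, pi in zip(y, preds))
    return total

# the empirical tau-quantile minimizes the pinball loss over a sample:
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
best = min(data, key=lambda c: sum(pinball_loss(x - c, 0.3) for x in data))

# a perfect median fit incurs zero composite loss at tau = 0.5
total = composite_loss([1.0, 2.0], {0.5: [1.0, 2.0]})
```

Minimizing the composite objective jointly across quantile levels is what lets information be shared between levels, the source of the efficiency gain over separate per-level fits.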
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation problem for noncircular (NC) sources in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of the signals are used to double the virtual array aperture, and real-valued data are obtained by utilizing a unitary transformation. Then a real-valued block-sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve enhanced sparsity of the solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because the noncircular properties of the signals are used to extend the virtual array aperture and an additional real structure is used to suppress the noise, the proposed method provides better performance compared with conventional sparse recovery based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based primarily on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structures in the CT volume; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures of different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used to evaluate our proposed method. The average extraction rate was about 79.1%, with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.
3D reconstruction of synapses with deep learning based on EM Images
NASA Astrophysics Data System (ADS)
Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei
2017-03-01
Recently, due to the rapid development of the electron microscope (EM) with its high resolution, stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and synapses can be analyzed in EM stacks, automated routines for the reconstruction of synapses based on EM images can become a very useful tool for analyzing large volumes of brain tissue and providing the ability to understand the mechanisms of the brain. In this article, we propose a novel automated method to realize 3D reconstruction of synapses for Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) with deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we utilize a deep learning method together with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method that takes contextual cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, and (5) applying a FIJI plugin to show the final 3D visualization of the synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of our proposed method.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet the quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting the sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during the iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm with the learning strategy performs better than dual-domain algorithms without a learned regularization model.
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of palm veins based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image. This algorithm effectively overcomes errors in texture extraction at the edges. Finally, the system is equipped with higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method achieves an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has obvious advantages in vein extraction efficiency, matching accuracy, and algorithm efficiency.
Detection of glucose in the human brain with 1 H MRS at 7 Tesla.
Kaiser, Lana G; Hirokazu, Kawaguchi; Fukunaga, Masaki; B Matson, Gerald
2016-12-01
A new method is proposed for noninvasive detection of glucose in vivo using proton MR spectroscopy at 7 Tesla. The proposed method utilizes J-difference editing to uncover the resonance of beta-glucose (β-glc) at 3.23 ppm, which strongly overlaps with choline. Calculations using the density matrix formalism are used to maximize the signal-to-noise ratio of the β-glc resonance at 3.23 ppm. The calculations are verified using phantom and in vivo data collected at 7 Tesla. The proposed method allows observation of the glucose signal at 3.23 ppm in the human brain spectrum. Additional co-edited resonances of N-acetylaspartylglutamate and glutathione are also detected in the same experiment. The proposed method does not require carbon (13C)-labeled glucose injections or 13C hardware; as such, it has the potential to provide valuable information on intrinsic glucose concentration in the human brain in vivo. Magn Reson Med 76:1653-1660, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
A Distributed Signature Detection Method for Detecting Intrusions in Sensor Systems
Kim, Ilkyu; Oh, Doohwan; Yoon, Myung Kuk; Yi, Kyueun; Ro, Won Woo
2013-01-01
Sensor nodes in wireless sensor networks are easily exposed to open and unprotected regions. A security solution is strongly recommended to prevent networks against malicious attacks. Although many intrusion detection systems have been developed, most systems are difficult to implement for the sensor nodes owing to limited computation resources. To address this problem, we develop a novel distributed network intrusion detection system based on the Wu–Manber algorithm. In the proposed system, the algorithm is divided into two steps; the first step is dedicated to a sensor node, and the second step is assigned to a base station. In addition, the first step is modified to achieve efficient performance under limited computation resources. We conduct evaluations with random string sets and actual intrusion signatures to show the performance improvement of the proposed method. The proposed method achieves a speedup factor of 25.96 and reduces 43.94% of packet transmissions to the base station compared with the previously proposed method. The system achieves efficient utilization of the sensor nodes and provides a structural basis of cooperative systems among the sensors. PMID:23529146
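The core of the Wu-Manber algorithm, a block-wise SHIFT table plus verification only on zero shifts, can be sketched on a single machine. The paper's contribution is splitting these steps between the sensor node and the base station, which this sketch does not model, and the signature strings are illustrative:

```python
def build_tables(patterns, block=2):
    """Wu-Manber preprocessing: a SHIFT table of safe skip distances keyed
    by character blocks of the pattern prefixes (truncated to the shortest
    pattern length m), plus a bucket of patterns keyed by their block
    ending at position m."""
    m = min(len(p) for p in patterns)
    default = m - block + 1
    shift, bucket = {}, {}
    for p in patterns:
        for i in range(m - block + 1):
            blk = p[i:i + block]
            shift[blk] = min(shift.get(blk, default), m - block - i)
        bucket.setdefault(p[m - block:m], []).append(p)
    return m, default, shift, bucket

def search(text, patterns, block=2):
    """Scan the text, skipping ahead by SHIFT; verify full patterns only
    when the shift is zero (the current block can end a pattern prefix)."""
    m, default, shift, bucket = build_tables(patterns, block)
    hits, pos = [], m - block
    while pos <= len(text) - block:
        blk = text[pos:pos + block]
        s = shift.get(blk, default)
        if s:
            pos += s            # safe skip: no pattern can end here
        else:
            start = pos + block - m
            for p in bucket.get(blk, []):
                if text.startswith(p, start):
                    hits.append((start, p))
            pos += 1
    return hits

sigs = ["attack", "probe", "evil"]   # hypothetical intrusion signatures
hits = search("a benign probe then an attack payload", sigs)
```

The cheap skip loop is the part suited to a resource-limited sensor node, while the exact verification of candidate positions is the natural step to offload to a base station.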
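The two-step split described in the abstract can be sketched as follows. This is a deliberately simplified illustration, with a prefix-hash filter standing in for the Wu–Manber shift tables; all names and the block size are hypothetical, not the paper's implementation:

```python
# Illustrative two-step split of multi-pattern signature matching
# (a hypothetical simplification of the distributed scheme in the abstract).

BLOCK = 2  # block size used for the prefix table (assumption for this sketch)

def build_prefix_table(patterns):
    """Map each pattern's first BLOCK characters to the patterns sharing it."""
    table = {}
    for p in patterns:
        table.setdefault(p[:BLOCK], []).append(p)
    return table

def sensor_step(text, prefix_table):
    """Step 1 (sensor node): cheap filter that only reports candidate offsets."""
    candidates = []
    for i in range(len(text) - BLOCK + 1):
        if text[i:i + BLOCK] in prefix_table:
            candidates.append(i)
    return candidates

def base_station_step(text, candidates, prefix_table):
    """Step 2 (base station): verify full signatures at candidate offsets."""
    hits = []
    for i in candidates:
        for p in prefix_table[text[i:i + BLOCK]]:
            if text.startswith(p, i):
                hits.append((i, p))
    return hits

signatures = ["attack", "atone", "evil"]
payload = "benign attack text with atone"
table = build_prefix_table(signatures)
cands = sensor_step(payload, table)
print(base_station_step(payload, cands, table))
```

The point of the split is visible even in this toy: the sensor only transmits the few candidate offsets, and the expensive full-pattern verification happens at the base station.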
Multi-objective based spectral unmixing for hyperspectral images
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei
2017-02-01
Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm-based optimization problem. Existing methods usually apply a relaxation to the original l0 norm. However, the relaxation may introduce sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective algorithm that solves the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within a limited number of iterations. The proposed method can deal with the l0 norm directly via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
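A minimal sketch of handling the l0 norm directly through binary coding and random flipping, under toy assumptions: a four-spectrum library, unweighted abundances, and simple dominance-based acceptance in place of the paper's full multi-objective machinery. Everything here is invented for illustration:

```python
import random

# Toy library of 4 "spectra" (3 bands each); the observed pixel is the sum of
# spectra 0 and 2, so the ideal binary code is [1, 0, 1, 0].
LIB = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0], [0.0, 0.0, 1.0], [0.3, 0.3, 0.3]]
PIXEL = [1.0, 0.0, 1.0]

def recon_error(code):
    """f1: squared reconstruction error of the unweighted sum of selected spectra."""
    recon = [sum(LIB[j][b] for j in range(len(LIB)) if code[j]) for b in range(3)]
    return sum((r - p) ** 2 for r, p in zip(recon, PIXEL))

def sparsity(code):
    """f2: number of selected endmembers (the l0 'norm' handled directly)."""
    return sum(code)

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and better in one."""
    fa, fb = (recon_error(a), sparsity(a)), (recon_error(b), sparsity(b))
    return fa[0] <= fb[0] and fa[1] <= fb[1] and fa != fb

random.seed(0)
population = [[random.randint(0, 1) for _ in LIB] for _ in range(8)]
for _ in range(200):  # randomly flip one bit per individual, keep dominating trials
    for ind in population:
        j = random.randrange(len(LIB))
        trial = ind[:]
        trial[j] ^= 1
        if dominates(trial, ind):
            ind[:] = trial

best = min(population, key=lambda c: (recon_error(c), sparsity(c)))
print(best, recon_error(best))
```

Because the search variables are binary selection bits, no continuous relaxation of the l0 norm is needed; that is the structural point the abstract makes, though the paper's actual flipping strategy and recovery guarantee are of course more sophisticated.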
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera, such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of NUC, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with mid-wave infrared (MWIR) and long-wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors
Zhu, Jianliang; Wu, Panlong; Bo, Yuming
2016-01-01
Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbances, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant within one spinning circle and that the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the magnetic sensor data preprocessed with the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, which is a clearly superior calculation approach and can be applied under actual projectile environmental disturbances. PMID:27213389
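The advantage of integrating over a full rotation rather than sampling a single extremum can be illustrated on a toy noisy sinusoid. The formulas below are illustrative only, not the paper's derivation:

```python
import math
import random

# Toy comparison: estimate the amplitude of a noisy sinusoidal magnetometer
# signal over one rotation, either from a single peak sample (extremum-style)
# or from the integral of |signal| over the cycle (integral-style). The
# integral uses every sample, so zero-mean noise largely cancels.

random.seed(1)
N = 1000                       # samples per rotation cycle
AMP = 2.0                      # true amplitude
noise = [random.gauss(0, 0.3) for _ in range(N)]
signal = [AMP * math.sin(2 * math.pi * i / N) + noise[i] for i in range(N)]

# Extremum-style estimate: one (noisy) sample taken at the peak.
peak_est = signal[N // 4]

# Integral-style estimate: mean of |signal| over the cycle; for a pure sine,
# mean(|sin|) = 2/pi, so dividing by 2/pi recovers the amplitude.
integral_est = (sum(abs(s) for s in signal) / N) * math.pi / 2

print(abs(peak_est - AMP), abs(integral_est - AMP))
```

The single-sample estimate inherits the full noise standard deviation, while the integral estimate averages it down across the whole cycle, which is the qualitative claim behind the reported 90% variance reduction.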
A rod type linear ultrasonic motor utilizing longitudinal traveling waves: proof of concept
NASA Astrophysics Data System (ADS)
Wang, Liang; Wielert, Tim; Twiefel, Jens; Jin, Jiamei; Wallaschek, Jörg
2017-08-01
This paper proposes a non-resonant linear ultrasonic motor utilizing longitudinal traveling waves. The longitudinal traveling waves in the rod type stator are generated by inducing longitudinal vibrations at one end of the waveguide and eliminating reflections at the opposite end by a passive damper. Considering the Poisson’s effect, the stator surface points move on elliptic trajectories and the slider is driven forward by friction. In contrast to many other flexural traveling wave linear ultrasonic motors, the driving direction of the proposed motor is identical to the wave propagation direction. The feasibility of the motor concept is demonstrated theoretically and experimentally. First, the design and operation principle of the motor are presented in detail. Then, the stator is modeled utilizing the transfer matrix method and verified by experimental studies. In addition, experimental parameter studies are carried out to identify the motor characteristics. Finally, the performance of the proposed motor is investigated. Overall, the results indicate very dynamic drive characteristics. The motor prototype achieves a maximum mean velocity of 115 mm s-1 and a maximum load of 0.25 N. Thereby, the start-up and shutdown times from the maximum speed are lower than 5 ms.
Concept and design philosophy of a person-accompanying robot
NASA Astrophysics Data System (ADS)
Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi
1999-01-01
This paper proposes a person-accompanying robot as a novel human-collaborative robot: a legged mobile robot that can follow a person using its vision. Toward the future aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, whereas conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; the study of such human-observing functions is crucial for realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a basis for its human-observing function, we have realized face tracking utilizing skin-color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voices clearly and remotely by utilizing microphone arrays. The results of these preliminary studies suggest the feasibility of the proposed robot.
Scaling of mode shapes from operational modal analysis using harmonic forces
NASA Astrophysics Data System (ADS)
Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.
2017-10-01
This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and it can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily using general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to a typical operational modal analysis test. The proposed method is thus easy to apply and inexpensive relative to other methods for scaling mode shapes available in the literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides a better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental campaign on a real structure was carried out, and the reported results demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for mode shape scaling, we propose to call it OMAH.
Scaling Support Vector Machines On Modern HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Fu, Haohuan; Song, Shuaiwen
2015-02-01
We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.
ERIC Educational Resources Information Center
Heim, Bradley T.
2009-01-01
This paper proposes a new method for estimating family labor supply in the presence of taxes. This method accounts for continuous hours choices, measurement error, unobserved heterogeneity in tastes for work, the nonlinear form of the tax code, and fixed costs of work in one comprehensive specification. Estimated on data from the 2001 PSID, the…
Assessing the Utility of Item Response Theory Models: Differential Item Functioning.
ERIC Educational Resources Information Center
Scheuneman, Janice Dowd
The current status of item response theory (IRT) is discussed. Several IRT methods exist for assessing whether an item is biased. Focus is on methods proposed by L. M. Rudner (1975), F. M. Lord (1977), D. Thissen et al. (1988) and R. L. Linn and D. Harnisch (1981). Rudner suggested a measure of the area lying between the two item characteristic…
Verifying entanglement in the Hong-Ou-Mandel dip
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Megan R.; Enk, S. J. van
2011-04-15
The Hong-Ou-Mandel interference dip is caused by an entangled state, a delocalized biphoton state. We propose a method of detecting this entanglement by utilizing inverse Hong-Ou-Mandel interference, while taking into account vacuum and multiphoton contaminations, phase noise, and other imperfections. The method uses just linear optics and photodetectors, and for single-mode photodetectors we find a lower bound on the amount of entanglement.
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
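The core of the proposed fix, locating an event from its route and milepost, amounts to linear referencing along the route polyline. A minimal sketch follows; the function names and the route data are hypothetical, and direction handling is omitted:

```python
# Minimal linear-referencing sketch: locate an accident on a route polyline
# from its milepost by interpolating between measured vertices.

def locate_event(route, milepost):
    """route: list of (x, y, milepost) vertices in increasing milepost order."""
    for (x1, y1, m1), (x2, y2, m2) in zip(route, route[1:]):
        if m1 <= milepost <= m2:
            t = (milepost - m1) / (m2 - m1)
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    raise ValueError("milepost outside route extent")

# Hypothetical route with vertices at mileposts 0, 1 and 3.
route = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (1.0, 2.0, 3.0)]
print(locate_event(route, 2.0))  # halfway along the second segment
```

The bug the paper addresses arises when an implementation ignores part of this triple (route number, direction, milepost) and snaps the event to the wrong feature; the sketch shows only the interpolation step.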
Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion
Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza
2013-01-01
Purpose: To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods: CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, sparsity regularization using a temporal principal-component (pc) basis, and zero-filled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. In 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results: The proposed technique yields images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to the other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zero-filled). Signal intensity curves indicate similar uptake dynamics between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging.
Conclusion: The proposed reconstruction utilizes sparsity regularization based on localized information in both the spatial and temporal domains for highly accelerated CMR perfusion, with potential utility in free-breathing 3D acquisitions. PMID:24123058
NASA Astrophysics Data System (ADS)
Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab
2015-12-01
Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen, and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the Chan-Vese (CV) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and the morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to hasten the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
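The linear contrast stretching step mentioned in the abstract is the standard mapping of the input intensity range onto the full output range; a minimal sketch on a flat list of pixel values:

```python
# Linear contrast stretching: map the input intensity range [lo, hi]
# linearly onto [0, out_max].

def stretch(pixels, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0 for _ in pixels]  # flat image: nothing to stretch
    return [round((p - lo) * out_max / (hi - lo)) for p in pixels]

print(stretch([50, 100, 150]))  # -> [0, 128, 255]
```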
A method to evaluate process performance by integrating time and resources
NASA Astrophysics Data System (ADS)
Wang, Yu; Wei, Qingjie; Jin, Shuang
2017-06-01
The purpose of process mining is to improve an enterprise's existing processes, so how to measure process performance is particularly important. However, current research on performance evaluation methods is still insufficient: the main approaches use time or resource statistics alone, and such basic statistics cannot evaluate process performance very well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. This method can be used to measure the utilization and redundancy of resources in a process. This paper introduces the design principle and formula of the evaluation algorithm, followed by the design and implementation of the evaluation method. Finally, we use the evaluation method to analyse an event log from a telephone maintenance process and propose an optimization plan.
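As a rough illustration of combining the time and resource dimensions (the paper's actual formulas are not given in the abstract), resource utilization can be computed from an event log as busy time over the observed horizon:

```python
# Hypothetical sketch: per-resource utilization from a process-mining event log.

def utilization(events, horizon):
    """events: list of (resource, start, end) tuples; horizon: observed span."""
    busy = {}
    for res, start, end in events:
        busy[res] = busy.get(res, 0.0) + (end - start)
    return {res: b / horizon for res, b in busy.items()}

log = [("alice", 0, 4), ("alice", 6, 8), ("bob", 0, 2)]
print(utilization(log, horizon=10))  # alice busy 6/10, bob 2/10
```

A low utilization value for a resource over a long horizon is one simple signal of the "redundancy" the abstract refers to.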
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are fed into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO), and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of the solutions of the different algorithms is measured in detail, and travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
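Building a stop-to-stop O-D matrix from tap records is the straightforward part; the paper's contribution is the metaheuristic aggregation of stops, which this toy sketch does not attempt. Stop names and trip data are invented:

```python
# Toy construction of a stop-to-stop origin-destination (O-D) matrix
# from AFC tap records (origin stop, destination stop).

def od_matrix(trips, stops):
    idx = {s: i for i, s in enumerate(stops)}
    M = [[0] * len(stops) for _ in stops]
    for origin, dest in trips:
        M[idx[origin]][idx[dest]] += 1
    return M

stops = ["A", "B", "C"]
trips = [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A")]
print(od_matrix(trips, stops))
```

On real AFC data this matrix is enormous and sparse, which is why the paper clusters stops before estimating flows.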
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) has been proposed to overcome the high computational cost problem that limits the practical applications of traditional batch BSS algorithms. However, existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential for separating correlated sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
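For context, the factorization underlying the method can be sketched with standard rank-1 multiplicative updates; the paper's incremental learning mode, volume constraint, and natural-gradient form are all omitted from this plain sketch:

```python
# Rank-1 NMF via multiplicative-style updates, as a minimal sketch of the
# factorization V ~ w h^T that the abstract's method builds on.

def nmf_rank1(V, iters=200):
    m, n = len(V), len(V[0])
    w = [1.0] * m
    h = [1.0] * n
    for _ in range(iters):
        # Update w: scale each w[i] by (V h)_i / (w_i * h.h); all terms stay
        # nonnegative, which is the defining property of these updates.
        hh = sum(x * x for x in h)
        w = [w[i] * sum(V[i][j] * h[j] for j in range(n)) / (w[i] * hh) for i in range(m)]
        ww = sum(x * x for x in w)
        h = [h[j] * sum(V[i][j] * w[i] for i in range(m)) / (h[j] * ww) for j in range(n)]
    return w, h

# V is exactly rank 1: the outer product of [1, 2] and [3, 1].
V = [[3.0, 1.0], [6.0, 2.0]]
w, h = nmf_rank1(V)
recon = [[w[i] * h[j] for j in range(2)] for i in range(2)]
print(recon)
```

On this exactly rank-1 input the reconstruction converges to V; real BSS problems use higher ranks and the constraints the abstract describes to make the factorization unique.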
Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.
Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita
2014-04-01
Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging with the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptual space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on the two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves localization accuracies of 0.9483 and 0.9387 on STARE and DRIVE, respectively.
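A basic single-scale, single-orientation line-operator response (a simplification of the multi-scale, multi-orientation operator the abstract refers to) is the mean intensity along a line through the pixel minus the mean of the surrounding window; vessels, being line-shaped, score high:

```python
# Simplified line-operator response at pixel (r, c) for one (vertical)
# orientation and one scale; img is a list of rows of gray values.

def line_response(img, r, c, half=2):
    window = [img[r + dr][c + dc] for dr in range(-half, half + 1)
                                  for dc in range(-half, half + 1)]
    line = [img[r + dr][c] for dr in range(-half, half + 1)]  # vertical line
    return sum(line) / len(line) - sum(window) / len(window)

# 5x5 patch with a bright vertical "vessel" in the middle column.
img = [[1, 1, 9, 1, 1] for _ in range(5)]
print(line_response(img, 2, 2))  # strong positive response on the vessel
```

The full operator takes the maximum response over several orientations and several line lengths (scales), which is what lets it ignore blob-shaped dark lesions.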
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here, the algorithm is modified to provide even better performance than the original SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than SPIHT. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. It was applied to the compression of multichannel ECG data, and a specific procedure based on the modified algorithm was presented for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results in the compression of multichannel ECG data. Furthermore, the proposed multichannel compression method can be utilized efficiently to compress a single signal that is stored over a long period.
Force sensing using 3D displacement measurements in linear elastic bodies
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hui, Chung-Yuen
2016-07-01
In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed that finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution of the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force-sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
Neuro-adaptive backstepping control of SISO non-affine systems with unknown gain sign.
Ramezani, Zahra; Arefi, Mohammad Mehdi; Zargarzadeh, Hassan; Jahed-Motlagh, Mohammad Reza
2016-11-01
This paper presents two neuro-adaptive controllers for a class of uncertain single-input, single-output (SISO) nonlinear non-affine systems with unknown gain sign. The first approach is a state-feedback controller, in which a neuro-adaptive state-feedback controller is constructed based on the backstepping technique. The second approach is an observer-based controller, in which K-filters are designed to estimate the system states. The proposed method relaxes the need for a priori knowledge of the control gain sign; this problem is addressed by utilizing Nussbaum-type functions. In both methods, neural networks are employed to approximate the unknown nonlinear functions. The proposed adaptive control schemes guarantee that all closed-loop signals are semi-globally uniformly ultimately bounded (SGUUB). Finally, the theoretical results are numerically verified through simulation examples, which show the effectiveness of the proposed methods. Copyright © 2016 ISA. All rights reserved.
Privacy preserving data anonymization of spontaneous ADE reporting system dataset.
Lin, Wen-Yang; Yang, Duen-Chuan; Wang, Jie-Teng
2016-07-18
To facilitate long-term safety surveillance of marketed drugs, many spontaneous reporting systems (SRSs) for ADR events have been established world-wide. Since the data collected by SRSs contain sensitive personal health information that should be protected to prevent the identification of individuals, this raises the issue of privacy-preserving data publishing (PPDP), that is, how to sanitize (anonymize) raw data before publishing. Although much work has been done on PPDP, very few studies have focused on protecting the privacy of SRS data, and none of the existing anonymization methods is well suited to SRS datasets, because these contain characteristics such as rare events, multiple records per individual, and multi-valued sensitive attributes. We propose a new privacy model called MS(k, θ*)-bounding for protecting published spontaneous ADE reporting data from privacy attacks. Our model has the flexibility of varying privacy thresholds, i.e., θ*, for different sensitive values and takes the characteristics of SRS data into consideration. We also propose an anonymization algorithm for sanitizing the raw data to meet the requirements specified through the proposed model. Our algorithm adopts a greedy clustering strategy to group the records into clusters, conforming to an innovative anonymization metric that aims to minimize the privacy risk as well as maintain the data utility for ADR detection. An empirical study was conducted using the FAERS dataset from 2004Q1 to 2011Q4. We compared our model with four prevailing methods, including k-anonymity, (X, Y)-anonymity, multi-sensitive l-diversity, and (α, k)-anonymity, evaluated via two measures, Danger Ratio (DR) and Information Loss (IL), and considered three different scenarios of threshold setting for θ*: uniform, level-wise, and frequency-based. We also conducted experiments to inspect the impact of anonymized data on the strengths of discovered ADR signals.
With all three threshold settings for sensitive values, our method can successfully prevent the disclosure of sensitive values (nearly all observed DRs are zero) without sacrificing too much data utility. With non-uniform threshold settings, level-wise or frequency-based, our MS(k, θ*)-bounding exhibits the best data utility and the least privacy risk among all the models. The experiments conducted on selected ADR signals from MedWatch show that only very small differences in signal strength (PRR or ROR) were observed. The results show that our method can effectively prevent the disclosure of patients' sensitive information without sacrificing data utility for ADR signal detection. We propose a new privacy model for protecting SRS data that possess some characteristics overlooked by contemporary models, and an anonymization algorithm to sanitize SRS data in accordance with the proposed model. Empirical evaluation on the real SRS dataset, i.e., FAERS, shows that our method can effectively solve the privacy problem in SRS data without influencing the ADR signal strength.
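The k-anonymity model that this work generalizes can be checked in a few lines: every combination of quasi-identifier values must occur at least k times in the published table. The records and quasi-identifiers below are hypothetical:

```python
from collections import Counter

# Check k-anonymity: every quasi-identifier combination must appear >= k times.

def is_k_anonymous(records, quasi_ids, k):
    groups = Counter(tuple(rec[q] for q in quasi_ids) for rec in records)
    return all(count >= k for count in groups.values())

data = [
    {"age": "30-39", "zip": "123**", "adr": "nausea"},
    {"age": "30-39", "zip": "123**", "adr": "rash"},
    {"age": "40-49", "zip": "456**", "adr": "nausea"},
]
print(is_k_anonymous(data, ["age", "zip"], k=2))  # False: one group is a singleton
```

MS(k, θ*)-bounding layers per-sensitive-value thresholds on top of this group-size requirement, which the sketch does not model.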
Tensions in Distributed Leadership
ERIC Educational Resources Information Center
Ho, Jeanne; Ng, David
2017-01-01
Purpose: This article proposes the utility of using activity theory as an analytical lens to examine the theoretical construct of distributed leadership, specifically to illuminate tensions encountered by leaders and how they resolved these tensions. Research Method: The study adopted the naturalistic inquiry approach of a case study of an…
Compulsory Birth Control and Fertility Measures in India.
ERIC Educational Resources Information Center
Halli, S. S.
1983-01-01
Discussion of possible applications of the microsimulation approach to analysis of population policy proposes compulsory sterilization policy for all of India. Topics covered include India's population problem, methods for generating a distribution of couples to be sterilized, model validation, data utilized, data analysis, program limitations,…
Evaluation of Two PCR-based Swine-specific Fecal Source Tracking Assays (Abstract)
Several PCR-based methods have been proposed to identify swine fecal pollution in environmental waters. However, the utility of these assays in identifying swine fecal contamination on a broad geographic scale is largely unknown. In this study, we evaluated the specificity, distr...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inaba, Kensuke; Tamaki, Kiyoshi; Igeta, Kazuhiro
2014-12-04
In this study, we propose a method for generating cluster states of atoms in an optical lattice. By utilizing the quantum properties of Wannier orbitals, we create a tunable Ising interaction between atoms without inducing spin-exchange interactions. We investigate the causes of errors that occur during entanglement generation, and then propose an error-management scheme that allows us to create high-fidelity cluster states in a short time.
A rule-based automatic sleep staging method.
Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian
2012-03-30
In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering temporal contextual information was applied for continuity. Compared with manual scoring by R&K rules on all-night polysomnography (PSG) recordings of seventeen healthy subjects, the proposed method reached an overall agreement of 86.68% and a kappa coefficient of 0.79. This method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
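A classifier of this kind can be sketched as a chain of threshold rules followed by temporal smoothing; the feature names and thresholds below are illustrative placeholders, not the paper's actual twelve features or fourteen rules:

```python
def score_epoch(features):
    """Toy hierarchical rule chain for sleep staging.

    `features` is a dict of per-epoch measurements; the keys and
    thresholds here are illustrative, not the published rule set.
    """
    if features["emg_power"] > 0.8:        # high muscle tone -> wake
        return "Wake"
    if features["eog_movement"] > 0.6:     # rapid eye movements, low EMG
        return "REM"
    if features["delta_ratio"] > 0.5:      # dominant slow waves
        return "N3"
    if features["spindle_density"] > 0.3:  # spindles / K-complexes
        return "N2"
    return "N1"

def smooth(stages, window=3):
    """Majority-vote smoothing over temporally adjacent epochs."""
    out = []
    for i in range(len(stages)):
        lo = max(0, i - window // 2)
        hi = min(len(stages), i + window // 2 + 1)
        seg = stages[lo:hi]
        out.append(max(set(seg), key=seg.count))
    return out
```

The smoothing step mirrors the paper's use of temporal contextual information: an isolated epoch labeled differently from its neighbors is voted back to the surrounding stage.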
NASA Astrophysics Data System (ADS)
Bigdeli, Behnaz; Pahlavani, Parham
2017-01-01
Interpretation of synthetic aperture radar (SAR) data is difficult because the geometry and spectral range of SAR differ from those of optical imagery. Nevertheless, SAR imaging can complement multispectral (MS) optical remote sensing because it does not depend on solar illumination or weather conditions. This study presents a multisensor fusion of SAR and MS data based on classification and regression trees (CART) and support vector machines (SVM) through a decision fusion system. First, different feature extraction strategies were applied to the SAR and MS data to produce additional spectral and textural information. To overcome redundancy and correlation between features, the noise-whitened Harsanyi-Farrand-Chang method was used to estimate the intrinsic dimension of the feature space. Then, principal component analysis and independent component analysis were applied to the stacked feature space of the two datasets. Afterward, SVM and CART classified each reduced feature space. Finally, a fusion strategy combined the classification results. To show the effectiveness of the proposed methodology, single-sensor classification of each dataset was compared with the obtained results. A coregistered Radarsat-2 and WorldView-2 dataset from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that combining SAR data with optical data using the proposed methodology improves the classification results for most of the classes. The proposed fusion method achieved overall accuracies of approximately 93.24% and 95.44% for two different areas of the dataset.
Toward a Smartphone Application for Estimation of Pulse Transit Time
Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei
2015-01-01
Pulse transit time (PTT) is an important physiological parameter that directly correlates with the elasticity and compliance of vascular walls and with variations in blood pressure. This paper presents a PTT estimation method based on photoplethysmographic imaging (PPGi). The method utilizes two opposing cameras for simultaneous acquisition of PPGi waveform signals from the index fingertip and the forehead temple. An algorithm for the detection of maxima and minima in PPGi signals was developed, including interpolation to recover the true positions of these points. We compared our PTT measurements with those obtained using current methodological standards. Statistical results indicate that the PTT measured by our proposed method exhibits good correlation with the established method. The proposed method is especially suitable for implementation in dual-camera smartphones, which could facilitate PTT measurement among populations affected by cardiac complications. PMID:26516861
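The peak-refinement step can be illustrated with three-point parabolic interpolation, a standard way to estimate the sub-sample position of an extremum; the paper's exact interpolation technique is not specified here, so this is only an assumed variant:

```python
def find_peaks(signal):
    """Indices of local maxima in a 1-D sampled waveform."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] >= signal[i + 1]]

def refine_peak(signal, i):
    """Parabolic (three-point) interpolation of the true peak position.

    Fits a parabola through samples i-1, i, i+1 and returns the
    sub-sample position of its vertex, a common way to estimate the
    'real' position of an extremum between samples.
    """
    y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)
    return i + 0.5 * (y0 - y2) / denom

def pulse_transit_time(t_finger, t_forehead):
    """PTT as the delay between corresponding pulse peaks (in samples)."""
    return [tf - th for tf, th in zip(t_finger, t_forehead)]
```

Given refined peak times from the fingertip and forehead cameras, the per-beat PTT is simply their difference, converted to milliseconds by the frame rate.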
Multi-camera digital image correlation method with distributed fields of view
NASA Astrophysics Data System (ADS)
Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata
2017-11-01
A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm that utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects viewed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
12 CFR 1320.12 - Advance notice of proposed determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... MARKET UTILITIES Consultations, Determinations and Hearings § 1320.12 Advance notice of proposed determination. (a) Notice of proposed determination and opportunity for hearing. Before making any final... provide the financial market utility with advance notice of the proposed determination, and proposed...
12 CFR 1320.12 - Advance notice of proposed determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... MARKET UTILITIES Consultations, Determinations and Hearings § 1320.12 Advance notice of proposed determination. (a) Notice of proposed determination and opportunity for hearing. Before making any final... provide the financial market utility with advance notice of the proposed determination, and proposed...
12 CFR 1320.12 - Advance notice of proposed determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... MARKET UTILITIES Consultations, Determinations and Hearings § 1320.12 Advance notice of proposed determination. (a) Notice of proposed determination and opportunity for hearing. Before making any final... provide the financial market utility with advance notice of the proposed determination, and proposed...
3D Measurement of Anatomical Cross-sections of Foot while Walking
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Mochimaru, Masaaki; Kanade, Takeo
Recently, techniques for measuring and modeling the human body have been attracting attention, because human models are useful for ergonomic design in manufacturing. We aim to measure the accurate shape of the human foot, which will be useful for the design of shoes. For this purpose, shape measurement of the foot in motion is clearly important, because the foot is deformed inside the shoe while walking or running. In this paper, we propose a method to measure anatomical cross-sections of the foot while walking. The dynamic shape of anatomical cross-sections had not previously been measured, though such cross-sections are basic and widely used in the field of biomechanics. Our proposed method is based on multi-view stereo. The target cross-sections are painted in individual colors (red, green, yellow, and blue), and the proposed method utilizes the characteristics of the target shape in the camera-captured images. Several nonlinear conditions are introduced to find consistent correspondences across all images. Our desired accuracy is an error of less than 1 mm, which is similar to that of existing 3D scanners for static foot measurement. In our experiments, the proposed method achieved the desired accuracy.
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biological studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding and edge-based active contours was proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity were utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression
2015-01-01
A novel optimal structure for implementing the 3D integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set with the shortest bit values. The optimal 3D integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
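The selection criterion can be illustrated by measuring the MSE between the exact DCT-II basis and a fixed-point integer approximation of it; the bit-width parameter below is a generic stand-in for the paper's specific integer sets, not their actual values:

```python
import math

def dct_matrix(n):
    """Exact orthonormal DCT-II basis matrix (n x n)."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                  for j in range(n)])
    return m

def integer_approximation(m, bits):
    """Round each basis entry to a fixed-point value with `bits` fractional bits."""
    s = float(1 << bits)
    return [[round(v * s) / s for v in row] for row in m]

def mse(a, b):
    """Mean squared error between two equal-sized matrices."""
    n = sum(len(r) for r in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n
```

Shorter integer values (fewer bits) mean cheaper hardware but higher MSE; the paper's contribution is locating the integer sets that balance this tradeoff for the 3D case.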
Mono-isotope Prediction for Mass Spectra Using Bayes Network.
Li, Hui; Liu, Chunmei; Rwebangira, Mugizi Robert; Burge, Legand
2014-12-01
Mass spectrometry is one of the most widely utilized methods for studying protein functions and components. The challenge of mono-isotope pattern recognition from large-scale protein mass spectral data calls for computational algorithms and tools to speed up the analysis and improve the analytic results. We utilized a naïve Bayes network as the classifier, with the assumption that the selected features are independent, to predict mono-isotope patterns from mass spectrometry. Mono-isotopes detected from validated theoretical spectra were used as prior information in the Bayes method. Three main features extracted from the dataset were employed as independent variables in our model. The application of the proposed algorithm to the public Mo dataset demonstrates that our naïve Bayes classifier is advantageous over existing methods in both accuracy and sensitivity.
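A Gaussian naïve Bayes classifier of the kind described, with class priors estimated from the training labels and the independence assumption across features, can be sketched as follows (the paper's three actual spectral features are not reproduced here):

```python
import math
from collections import defaultdict

def train_nb(samples, labels):
    """Per-class mean/variance for each feature (Gaussian naive Bayes),
    plus class priors estimated from label frequencies."""
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    model = {}
    for y, xs in by_class.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*xs), means)]
        model[y] = (n / len(samples), means, vars_)
    return model

def predict_nb(model, x):
    """Class with the highest posterior under the independence assumption."""
    def log_post(prior, means, vars_):
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return lp
    return max(model, key=lambda y: log_post(*model[y]))
```

Working in log space avoids underflow when many features are multiplied together, which matters at mass-spectrometry scale.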
NASA Astrophysics Data System (ADS)
Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.
2017-01-01
Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but many problems remain in those systems. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words and sentences in a text document. The expected outcome of this final project is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to improve the quality of the summary results automatically.
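The ranking idea can be sketched as PageRank-style power iteration over a sentence-similarity graph; the similarity weights and damping factor below are generic choices, not the project's exact formulation:

```python
def textrank(similarity, damping=0.85, iters=50):
    """Power iteration of PageRank-style scores over a sentence graph.

    `similarity[i][j]` is the edge weight between sentences i and j;
    scores converge to the stationary importance of each sentence.
    """
    n = len(similarity)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j or similarity[j][i] == 0:
                    continue
                out = sum(similarity[j][k] for k in range(n) if k != j)
                if out > 0:
                    rank += similarity[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores

def summarize(sentences, similarity, top=2):
    """Pick the `top` highest-ranked sentences, in original order."""
    scores = textrank(similarity)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: scores[i], reverse=True)[:top]
    return [sentences[i] for i in sorted(ranked)]
```

Returning the selected sentences in original document order, rather than score order, keeps the extractive summary readable.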
Lamie, Nesrine T; Yehia, Ali M
2015-01-01
Simultaneous determination of the dimenhydrinate (DIM) and cinnarizine (CIN) binary mixture was carried out with simple procedures. Three ratio-manipulating spectrophotometric methods were proposed. A normalized spectrum was utilized as a divisor for the simultaneous determination of both drugs with minimal manipulation steps. The proposed methods were simultaneous constant center (SCC), simultaneous derivative ratio spectrophotometry (S(1)DD), and the ratio H-point standard addition method (RHPSAM). Peak amplitudes at the isoabsorptive point in the ratio spectra were measured to determine the total concentration of DIM and CIN. For subsequent determination of the DIM concentration, the difference between peak amplitudes at 250 nm and 267 nm was used in SCC, while the peak amplitude at 275 nm of the first-derivative ratio spectra was used in S(1)DD; subtracting the DIM concentration from the total then provided the CIN concentration. The last method, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 220 nm and 230 nm. The coordinates of the intersection point between the two calibration lines correspond to the DIM and CIN concentrations. The proposed methods were successfully applied to combined dosage form analysis. Moreover, a statistical comparison between the proposed and reported spectrophotometric methods was performed. Copyright © 2015 Elsevier B.V. All rights reserved.
Segmentation of mouse dynamic PET images using a multiphase level set method
NASA Astrophysics Data System (ADS)
Cheng-Liao, Jinxiu; Qi, Jinyi
2010-11-01
Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity difference among organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of regions with high contrast. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
Guided filter and principal component analysis hybrid method for hyperspectral pansharpening
NASA Astrophysics Data System (ADS)
Qu, Jiahui; Li, Yunsong; Dong, Wenqian
2018-01-01
Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution by integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates the spatial information of the HS image. Unlike the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel after multiplication by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.
Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B
2015-02-10
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
Azmi, Syed Najmul Hejaz; Al-Fazari, Ahlam; Al-Badaei, Munira; Al-Mahrazi, Ruqiya
2015-12-01
An accurate, selective, and sensitive spectrofluorimetric method was developed for the determination of citalopram hydrobromide in commercial dosage forms. The method was based on the formation of a fluorescent ion-pair complex between citalopram hydrobromide and eosin Y in the presence of a disodium hydrogen phosphate/citric acid buffer solution of pH 3.4, extractable in dichloromethane. The extracted complex showed fluorescence intensity at λem = 554 nm after excitation at 259 nm. The calibration curve was linear over the concentration range 2.0-26.0 µg/mL. Under optimized experimental conditions, the proposed method was validated as per ICH guidelines. The effect of common excipients used as additives was tested and the tolerance limit calculated. The limit of detection of the proposed method was 0.121 μg/mL. The proposed method was successfully applied to the determination of citalopram hydrobromide in commercial dosage forms, and the results were compared with a reference RP-HPLC method. Copyright © 2015 John Wiley & Sons, Ltd.
Motion-compensated speckle tracking via particle filtering
NASA Astrophysics Data System (ADS)
Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu
2015-07-01
Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the minimum number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle, superposed with global motion, throughout the region of interest (ROI). The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
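The conventional traversal (exhaustive-search) SAD matching that the particle-filter approach aims to accelerate can be sketched as follows; block size and search radius are illustrative parameters:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, y, x, size):
    """Extract a size x size block with top-left corner (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_match(prev, curr, y, x, size, radius):
    """Exhaustive (traversal) search: displacement minimizing SAD.

    This is the conventional space-domain matching; the particle-filter
    variant samples candidate displacements instead of scanning all of
    them, which is where its speed-up comes from.
    """
    ref = block(curr, y, x, size)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > len(prev) or xx + size > len(prev[0]):
                continue
            cost = sad(ref, block(prev, yy, xx, size))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

The exhaustive loop visits (2·radius + 1)² candidates per block; sampling a few dozen particles instead is what trims the reported 25% of computation.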
Ship detection in panchromatic images: a new method and its DSP implementation
NASA Astrophysics Data System (ADS)
Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Wang, Mengfei; Meng, Gang
2016-03-01
In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. First, AdaBoost (Adaptive Boosting) classifiers trained on Haar features are utilized for coarse detection of ship targets. Then the LSD (Line Segment Detector) is adopted to extract line features from target slices for fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2 m show that the proposed algorithm achieves a high detection rate and a low false alarm rate. Moreover, the algorithm meets the needs of practical applications on a DSP (Digital Signal Processor).
Simple Backdoors on RSA Modulus by Using RSA Vulnerability
NASA Astrophysics Data System (ADS)
Sun, Hung-Min; Wu, Mu-En; Yang, Cheng-Ta
This investigation proposes two methods for embedding backdoors in the RSA modulus N = pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e = 2^16 + 1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, implying that no one can detect the backdoor by observing timing differences.
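For context, ordinary RSA key generation with a user-chosen exponent such as e = 2^16 + 1 proceeds as below; the paper's backdoor variants bias the choice of p and q so that information leaks through N, which this plain sketch deliberately does not do:

```python
import math
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Random prime with the top bit set (so N has full length)."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

def rsa_keygen(bits=64, e=2**16 + 1):
    """Generate (N, e, d) with a user-chosen public exponent."""
    while True:
        p, q = random_prime(bits // 2), random_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if p != q and math.gcd(e, phi) == 1:
            d = pow(e, -1, phi)  # modular inverse (Python 3.8+)
            return p * q, e, d
```

The tiny 64-bit modulus is for demonstration only; real keys use 2048 bits or more. The paper's point is that a backdoored prime-selection loop looks, from the outside, just like this one, including its running time.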
Zhang, Yilong; Han, Sung Won; Cox, Laura M; Li, Huilin
2017-12-01
The human microbiome is the collection of microbes living in and on the various parts of our body. The microbes living on our body do not live alone; they act as an integrated microbial community, with extensive competition and cooperation, and contribute to human health in important ways. Most current analyses focus on examining microbial differences at a single time point, which does not adequately capture the dynamic nature of microbiome data. With the advent of high-throughput sequencing and analytical tools, we are able to probe the interdependent relationships among microbial species through longitudinal studies. Here, we propose a multivariate distance-based test to evaluate the association between key phenotypic variables and microbial interdependence utilizing repeatedly measured microbiome data. Extensive simulations were performed to evaluate the validity and efficiency of the proposed method. We also demonstrate the utility of the proposed test using a well-designed longitudinal murine experiment and a longitudinal human study. The proposed methodology has been implemented in a freely distributed open-source R package and Python code. © 2017 WILEY PERIODICALS, INC.
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computational cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to demonstrate the effectiveness of the proposed TRPDA algorithm.
Segmenting breast cancerous regions in thermal images using fuzzy active contours
Ghayoumi Zadeh, Hossein; Haddadnia, Javad; Rahmani Seryasat, Omid; Mostafavi Isfahani, Sayed Mohammad
2016-01-01
Breast cancer is the main cause of death among young women in developing countries. The human body temperature carries critical medical information related to the overall body status, and an abnormal rise in total or regional body temperature is a natural symptom in diagnosing many diseases. Thermal imaging (thermography) utilizes infrared radiation; it is fast, non-invasive, and non-contact, and the resulting images are flexible and useful for monitoring the temperature of the human body. In some clinical studies and biopsy tests, the clinician needs to know the extent of the cancerous area, and in such cases the thermal image is very useful, as it is for detecting the cancerous tissue core. This paper presents a fully automated approach to detect the thermal edge and core of the cancerous area in thermography images. In order to evaluate the proposed method, 60 patients with an average age of 44.9 years who were suspected of breast tissue disease were chosen. These patients were referred to the Tehran Imam Khomeini Imaging Center. Clinical examinations such as ultrasound, biopsy, a questionnaire, and eventually thermography were performed precisely on these individuals. Finally, the proposed model was applied to segment the proven abnormal areas in the thermal images. The proposed model is based on a fuzzy active contour designed with fuzzy logic, and it can segment cancerous tissue areas from their borders in thermal images of the breast area. In order to evaluate the proposed algorithm, the Hausdorff and mean distances between the manual and automatic methods were used to assess how accurately the thermal core and edge were separated.
The Hausdorff distances between the proposed and manual methods for the thermal core and edge were 0.4719 ± 0.4389 mm and 0.3171 ± 0.1056 mm, respectively, and the mean distances between the proposed and manual methods for the thermal core and edge were 0.0845 ± 0.0619 mm and 0.0710 ± 0.0381 mm, respectively. Furthermore, the sensitivity in recognizing the thermal pattern in breast tissue masses is 85% and the accuracy is 91.98%. A thermal imaging system has thus been proposed that is able to recognize abnormal breast tissue masses. This system utilizes fuzzy active contours to extract the abnormal regions automatically. PMID:28096784
A New Soft Computing Method for K-Harmonic Means Clustering.
Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe
2016-01-01
The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
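The KHM objective itself, the quantity such metaheuristics minimize, can be written down directly; the small epsilon guarding division by zero is an implementation detail, not part of the formal definition:

```python
def khm_objective(points, centers, p=2):
    """K-harmonic means objective: for each point, K divided by the sum
    of inverse p-th powers of its distances to all centroids, summed
    over all points (i.e., K times the harmonic mean of distances)."""
    k = len(centers)
    total = 0.0
    for x in points:
        inv = 0.0
        for c in centers:
            d = sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
            inv += 1.0 / max(d ** p, 1e-12)  # epsilon avoids division by zero
        total += k / inv
    return total
```

A swarm-based search such as the proposed iSSO perturbs candidate centroid sets and keeps those that lower this objective; the harmonic mean makes the score forgiving of one far centroid as long as another centroid is close, which is why KHM is less sensitive to initialization than K-means.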
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction which has been successfully applied on many applications such as bioinformatics, information retrieval, etc. The link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on Distance Dependent Chinese Restaurant Process (DDCRP) model which enables us to utilize the information of the topological structure of the network such as shortest path and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for link prediction problem.
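The topological ingredients the model builds on, shortest-path distance and a distance-dependent decay over it, can be sketched as follows; the exponential decay and its scale alpha are common DDCRP choices, not necessarily those used in the paper:

```python
import math
from collections import deque

def shortest_path_length(adj, u, v):
    """BFS shortest-path distance in an unweighted graph;
    float('inf') if u and v are disconnected."""
    if u == v:
        return 0
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nxt in adj[node]:
            if nxt == v:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def ddcrp_decay(adj, u, v, alpha=1.0):
    """Distance-dependent affinity f(d) = exp(-d / alpha): the kind of
    decay a DDCRP prior places on links between distant nodes."""
    return math.exp(-shortest_path_length(adj, u, v) / alpha)
```

Under such a prior, nearby nodes are a priori more likely to join the same cluster, and disconnected pairs get zero affinity; the paper's Gibbs sampler then infers the cluster assignments from these affinities and the observed links.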
Assigning values to intermediate health states for cost-utility analysis: theory and practice.
Cohen, B J
1996-01-01
Cost-utility analysis (CUA) was developed to guide the allocation of health care resources under a budget constraint. As the generally stated goal of CUA is to maximize aggregate health benefits, the philosophical underpinning of this method is classic utilitarianism. Utilitarianism has been criticized as a basis for social choice because of its emphasis on the net sum of benefits without regard to the distribution of benefits. For example, it has been argued that absolute priority should be given to the worst off when making social choices affecting basic needs. Application of classic utilitarianism requires use of strength-of-preference utilities, assessed under conditions of certainty, to assign quality-adjustment factors to intermediate health states. The two methods commonly used to measure strength-of-preference utility, categorical scaling and time tradeoff, produce rankings that systematically give priority to those who are better off. Alternatively, von Neumann-Morgenstern utilities, assessed under conditions of uncertainty, could be used to assign values to intermediate health states. The theoretical basis for this would be Harsanyi's proposal that social choice be made under the hypothetical assumption that one had an equal chance of being anyone in society. If this proposal is accepted, as well as the expected-utility axioms applied to both individual choice and social choice, the preferred societal arrangement is that with the highest expected von Neumann-Morgenstern utility. In the presence of risk aversion, this will give some priority to the worst-off relative to classic utilitarianism. Another approach is to raise the values obtained by time-tradeoff assessments to a power a between 0 and 1. This would explicitly give priority to the worst off, with the degree of priority increasing as a decreases. Results could be presented over a range of a. The results of CUA would then provide useful information to those holding a range of philosophical points of view.
NASA Astrophysics Data System (ADS)
Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.
2018-03-01
Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements with the high-rate (10 Hz) GNSS-PPP monitoring technique. A model combining wavelet packet transform (WPT) de-noising with neural network (NN) prediction was proposed to improve the estimation of the dynamic behavior of structures from GNSS-PPP measurements. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were used to confirm the efficiency of the proposed model and the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, while placing reference planes at many known positions is time-consuming. Therefore, in this paper, a reconstruction method that needs no projector parameters and only two reference planes is proposed. A series of light beams, determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy reaches about 0.0454 mm.
Autonomous Landmark Calibration Method for Indoor Localization
Kim, Jae-Hoon; Kim, Byoung-Seop
2017-01-01
Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has significantly increased the utilization of machine-generated data. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on accurate location estimation for smart devices by determining appropriate landmarks for location-error calibration. In motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for every movement of the smart devices. Motion sensor data for the testbed were collected from actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method. PMID:28837071
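The threshold-control idea described in the abstract (flag a motion-sensor event as a valid landmark only when it stands out from the running statistics, and suppress repeats so errors do not accumulate) can be sketched as follows. The z-score heuristic, threshold value, and minimum-gap rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def detect_landmarks(accel_mag, threshold=2.0, min_gap=5):
    """Flag samples whose deviation from the signal mean exceeds a
    threshold (in standard deviations) as candidate landmarks; enforce
    a minimum gap so one physical event yields a single landmark."""
    mean = np.mean(accel_mag)
    std = np.std(accel_mag) + 1e-12
    z = np.abs(accel_mag - mean) / std
    landmarks, last = [], -min_gap
    for i, score in enumerate(z):
        if score > threshold and i - last >= min_gap:
            landmarks.append(i)
            last = i
    return landmarks

# Synthetic data: flat accelerometer magnitude with two spikes
# (two hypothetical landmark events, e.g. passing through doors).
sig = np.ones(100)
sig[20] = 10.0
sig[70] = 10.0
print(detect_landmarks(sig))
```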
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
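The conventional MLEM update used above for the basis coefficients can be sketched in a few lines. The toy system matrix is an assumption for illustration; in the paper the same update would be applied to patch-basis coefficients via the composite matrix A @ B, which is noted in a comment but not reproduced here:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Standard MLEM update for Poisson data y with system matrix A.
    With the re-parameterization x = B c (B: nonnegative patch-based
    basis from the MR image), the same update applies to the
    coefficients c using the composite matrix A @ B."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 8))  # toy system matrix
x_true = rng.uniform(1.0, 5.0, size=8)
y = A @ x_true                           # noise-free "measurements"
x_hat = mlem(A, y, n_iter=500)
```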
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples is calculated and used as a similarity index. According to the defined similarity index, a local calibration set is individually selected for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
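The select-then-fit structure of such a local regression can be sketched as below. Two simplifying assumptions: the distance is computed on the raw signal rather than the net analyte signal, and ordinary least squares stands in for PLS:

```python
import numpy as np

def local_predict(X_cal, y_cal, x_new, k=10):
    """For an unknown sample, pick the k calibration spectra closest in
    Euclidean distance and fit a local linear model on that subset.
    OLS stands in for PLS; raw spectra stand in for the NAS."""
    d = np.linalg.norm(X_cal - x_new, axis=1)
    idx = np.argsort(d)[:k]                      # local calibration set
    Xl = np.c_[np.ones(k), X_cal[idx]]           # add intercept column
    coef, *_ = np.linalg.lstsq(Xl, y_cal[idx], rcond=None)
    return np.r_[1.0, x_new] @ coef

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                    # toy "spectra"
w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w + 0.7                                  # toy property values
x_q = rng.normal(size=5)
pred = local_predict(X, y, x_q, k=20)
```

Because the toy data are exactly linear, the local fit reproduces the true value; on real spectra the local model only needs to be valid near the query.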
Film patterned retarder for stereoscopic three-dimensional display using ink-jet printing method.
Lim, Young Jin; Yu, Ji Hoon; Song, Ki Hoon; Lee, Myong-Hoon; Ren, Hongwen; Mun, Byung-June; Lee, Gi-Dong; Lee, Seung Hee
2014-09-22
We propose a film patterned retarder (FPR) for stereoscopic three-dimensional display with polarization glasses, fabricated using an ink-jet printing method. The conventional FPR process requires coating of a photo-alignment layer and then UV exposure through a wire-grid mask, which is very expensive and difficult. The proposed fabrication method utilizes a plastic substrate made of polyether sulfone and an alignment layer, poly(4,4'-(9,9-fluorenyl)diphenylene cyclobutanyltetracarboximide) (9FDA/CBDA), in which the former aligns reactive mesogen along the rubbing direction and the latter aligns it perpendicular to the rubbing direction. Ink-jet printing of 9FDA/CBDA line by line allows fabrication of a cost-effective FPR that can be widely applied in 3D displays.
Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H.; Li, Shutao; Farsiu, Sina
2017-01-01
We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique. PMID:28663902
A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate
NASA Astrophysics Data System (ADS)
Lin, Ruixing; Xu, Lin; Zheng, Xian
2018-03-01
Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index is proposed for harmonic source location based on IEEE Std 1459-2010. A method based only on harmonic distortion power is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results of typical harmonic loads verified the effectiveness of the proposed method.
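The threshold decision rule stated in the abstract reduces to a one-line comparison; the function name and units below are illustrative:

```python
def locate_harmonic_source(distortion_power, threshold):
    """Decision rule sketched from the abstract: if the harmonic
    distortion power at the point of common coupling exceeds a
    threshold set from prior information, attribute the dominant
    harmonic source to the customer side; otherwise to the utility
    side."""
    return "customer" if distortion_power > threshold else "utility"

print(locate_harmonic_source(120.0, 100.0))  # above threshold
print(locate_harmonic_source(80.0, 100.0))   # below threshold
```

The substance of the method lies in choosing the threshold from prior information about the background harmonic level, which the sketch leaves as an input.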
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has attracted considerable research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme naturally leads to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
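The low-rank kernel approximation that makes such methods scale is typically the Nystrom device, K ~ C W^+ C^T, built from a subset of landmark columns. The sketch below uses a rank-3 linear kernel so the approximation is exact with few landmarks; the metric-learning objective built on top of it is omitted, and the letter's exact scheme may differ:

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom low-rank approximation of a kernel matrix from a subset
    of landmark indices idx: K ~ C W^+ C^T, where C holds the landmark
    columns and W the landmark-landmark block."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
K = X @ X.T                       # rank-3 linear kernel matrix
K_hat = nystrom(K, np.arange(5))  # 5 landmarks suffice for rank 3
print(np.abs(K_hat - K).max())
```

Storing only C and W costs O(nm) instead of O(n^2), which is the source of the scalability the abstract claims.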
Video-Based Fingerprint Verification
Qin, Wei; Yin, Yilong; Liu, Lili
2013-01-01
Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and alignment, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both the dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
NASA Astrophysics Data System (ADS)
Qiao, Zijian; Lei, Yaguo; Lin, Jing; Jia, Feng
2017-02-01
In mechanical fault diagnosis, most traditional methods for signal processing attempt to suppress or cancel noise embedded in vibration signals in order to extract weak fault characteristics, whereas stochastic resonance (SR), as a potential tool for signal processing, is able to utilize the noise to enhance fault characteristics. The classical bistable SR (CBSR), one of the most widely used SR methods, however, has the disadvantage of inherent output saturation. The output saturation not only reduces the output signal-to-noise ratio (SNR) but also limits the enhancement capability for fault characteristics. To overcome this shortcoming, a novel method is proposed to extract the fault characteristics, in which a piecewise bistable potential model is established. Simulated signals are used to illustrate the effectiveness of the proposed method, and the results show that the method is able to extract weak fault characteristics and has good enhancement performance and anti-noise capability. Finally, the method is applied to fault diagnosis of bearings and planetary gearboxes. The diagnosis results demonstrate that the proposed method obtains a larger output SNR, higher spectrum peaks at fault characteristic frequencies, and therefore a larger recognizable degree than the CBSR method.
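One way to see the saturation issue is to compare potentials. The classical bistable potential U(x) = -a x^2/2 + b x^4/4 has steep quartic walls that clip large excursions; a piecewise model can replace the wall beyond a knee |x| = c with a linear continuation of matched value and slope. The construction below is illustrative; the exact piecewise form in the paper may differ:

```python
import numpy as np

def U_classical(x, a=1.0, b=1.0):
    """Classical bistable potential of CBSR."""
    return -a * x**2 / 2 + b * x**4 / 4

def U_piecewise(x, a=1.0, b=1.0, c=1.5):
    """Piecewise bistable potential: inside |x| <= c it matches the
    classical potential; beyond the knee the quartic wall is replaced
    by a linear continuation with matched value and slope, so large
    excursions are not clipped (avoiding output saturation)."""
    x = np.asarray(x, dtype=float)
    inner = U_classical(np.clip(x, -c, c), a, b)
    slope = -a * c + b * c**3            # dU/dx at x = +c
    outer = slope * (np.abs(x) - c)      # linear beyond the knee
    return inner + np.where(np.abs(x) > c, outer, 0.0)

xs = np.linspace(-3, 3, 7)
print(U_piecewise(xs))
```

At |x| = c the two potentials agree; beyond it the piecewise wall grows only linearly, so the SR output is not forced back as hard, which is the intuition behind the larger output SNR reported.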
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
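The expected-utility structure the paper builds on reduces, in the simplest binary case, to a likelihood-ratio test whose threshold is set by the utility of a hit against the penalty of a false alarm. The Gaussian special case below is a standard textbook template, shown as an assumed illustration of the kind of rule the ROC-type framework generalizes:

```python
import numpy as np

def decide(x, mu=2.0, utility=1.0, penalty=1.0, prior=0.5):
    """Bayes-optimal binary decision for a known signal mu in
    unit-variance Gaussian noise: report 'signal present' when the
    likelihood ratio exceeds the threshold set by the expected utility
    of a correct detection vs. the expected penalty of a false alarm."""
    lr = np.exp(mu * x - mu**2 / 2)  # N(mu,1) vs N(0,1) likelihood ratio
    thresh = (penalty * (1 - prior)) / (utility * prior)
    return lr > thresh

print(decide(3.0))    # strong evidence for the signal
print(decide(-1.0))   # evidence favors noise only
```

Sweeping the utility/penalty ratio traces out the ROC curve; the paper's contribution is extending this construction to LROC, EROC, FROC, and related summary curves.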
Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong
2015-06-15
An extinction-ratio-independent electrical method is proposed for measuring the chirp parameters of Mach-Zehnder electro-optic intensity modulators based on frequency-shifted optical heterodyne. The method utilizes electrical spectrum analysis of the heterodyne products between the intensity-modulated optical signal and the frequency-shifted optical carrier, and achieves intrinsic chirp-parameter measurement in the microwave region with high frequency resolution over a wide frequency range for a Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode despite the photodetection involved. Chirp parameters as a function of modulation frequency are experimentally measured and compared to those obtained with the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method for obtaining the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE are determined by cost values previously calculated by a simplified MUSIC scan, which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for reconstructing current distributions from noisy data.
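The wMNE solution has a standard closed form, j = W L^T (L W L^T + lambda I)^{-1} y, where the diagonal weight matrix W is where depth normalization and (in this paper) MUSIC-derived costs enter. The regularized sketch below is a generic illustration, not the authors' exact implementation:

```python
import numpy as np

def wmne(L, y, w, lam=1e-2):
    """Weighted minimum-norm estimate: minimizes
    ||y - L j||^2 + lam * j^T W^{-1} j with W = diag(w), via the dual
    form j = W L^T (L W L^T + lam I)^{-1} y."""
    W = np.diag(w)
    G = L @ W @ L.T
    return W @ L.T @ np.linalg.solve(G + lam * np.eye(len(y)), y)

rng = np.random.default_rng(3)
L = rng.normal(size=(10, 30))       # 10 sensors, 30 candidate sources
j_true = np.zeros(30)
j_true[7] = 1.0
y = L @ j_true
w = np.ones(30)
w[7] = 10.0                          # prescan upweights the true source
lam = 1e-2
j_hat = wmne(L, y, w, lam)
print(int(np.argmax(np.abs(j_hat))))
```

Larger weights make the corresponding sources cheaper in the penalty term, so a MUSIC prescan that upweights plausible locations biases the otherwise spread-out minimum-norm solution toward them.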
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D-edges can be detected from these point clouds, and a great number of edge and feature-line extraction methods have been proposed. Among these methods, an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), has been proposed. The AGPN method detects edges based on the analysis of the geometric properties of a query point's neighbourhood. It detects two kinds of 3D-edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
BagReg: Protein inference through machine learning.
Zhao, Can; Liu, Dao; Teng, Ben; He, Zengyou
2015-08-01
Protein inference from identified peptides is of primary importance in shotgun proteomics. The goal of protein inference is to determine whether each candidate protein is truly present in the sample. To date, many computational methods have been proposed to solve this problem. However, there is still no method that can fully utilize the information hidden in the input data. In this article, we propose a learning-based method named BagReg for protein inference. The method first extracts five features from the input data, and then chooses each feature in turn as the class feature to separately build models that predict the presence probabilities of proteins. Finally, the weak results from the five prediction models are aggregated to obtain the final result. We test our method on six publicly available data sets. The experimental results show that our method is superior to state-of-the-art protein inference algorithms.
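The bagging shape of BagReg (use each feature in turn as the prediction target, score candidates with each weak model, then average) can be sketched as below. The features, the linear learner, and the min-max normalization are illustrative assumptions; the paper's actual features and learners differ:

```python
import numpy as np

def bagreg_scores(features):
    """BagReg-style aggregation sketch: each column of `features` is in
    turn used as the target of a linear predictor built from the
    remaining columns; each model's predictions are normalized to
    [0, 1] as a presence-probability proxy and the weak scores are
    averaged."""
    n, m = features.shape
    scores = np.zeros(n)
    for j in range(m):
        X = np.delete(features, j, axis=1)
        X1 = np.c_[np.ones(n), X]                    # add intercept
        coef, *_ = np.linalg.lstsq(X1, features[:, j], rcond=None)
        pred = X1 @ coef
        span = pred.max() - pred.min() + 1e-12
        scores += (pred - pred.min()) / span
    return scores / m

# Toy data: "present" proteins have systematically higher feature values.
rng = np.random.default_rng(4)
present = rng.random(60) > 0.5
base = present.astype(float)
feats = base[:, None] * 2.0 + rng.normal(scale=0.2, size=(60, 5))
s = bagreg_scores(feats)
```

Because the five features are correlated through true presence, each weak model ranks present proteins higher, and averaging stabilizes the ranking.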
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element-free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. The finite element (FE) and element-free Galerkin (EFG) methods are both accurate numerical techniques; however, the FE technique requires burdensome mesh generation, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both: the complete-electrode-model forward problem is solved, an iterative regularized Gauss-Newton method is adopted to solve the inverse problem, and the hybrid method is used to compute the Jacobian. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Image reconstruction results are presented for a human chest experimental phantom.
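The regularized Gauss-Newton iteration used for the inverse step has a generic form that can be sketched independently of the forward solver: x <- x + (J^T J + lambda I)^{-1} J^T (y - F(x)). Below, a toy nonlinear forward map stands in for the FE-EFG solver, which is an assumption for illustration only:

```python
import numpy as np

def gauss_newton(F, J, x0, y, lam=1e-3, n_iter=20):
    """Regularized Gauss-Newton iteration for the nonlinear least-squares
    problem min ||y - F(x)||^2; F is the forward map (FE-EFG in the
    paper, a toy map below) and J its Jacobian."""
    x = x0.copy()
    for _ in range(n_iter):
        r = y - F(x)
        Jx = J(x)
        x = x + np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), Jx.T @ r)
    return x

# Toy forward model F(x) = u + 0.1 u^3 (elementwise) with u = A x.
A = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, 1.0]])
F = lambda x: (A @ x) + 0.1 * (A @ x) ** 3
J = lambda x: (1 + 0.3 * (A @ x) ** 2)[:, None] * A  # chain rule
x_true = np.array([0.4, -0.2])
x_hat = gauss_newton(F, J, np.zeros(2), F(x_true))
```

In EIT the expensive parts are evaluating F and J at each iterate, which is exactly where the hybrid FE-EFG forward solver enters.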
Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin
2016-01-01
Segmentation of liver tumors from computed tomography (CT) and tumor burden analysis play an important role in the choice of therapeutic strategies for liver diseases and in treatment monitoring. In this paper, a new segmentation method for liver tumors from contrast-enhanced CT imaging is proposed. As manual segmentation of tumors for liver treatment planning is both labor intensive and time-consuming, a highly accurate automatic tumor segmentation is desired. The proposed framework is fully automatic, requiring no user interaction. The segmentation method, evaluated on real-world clinical data from patients, is based on a hybrid approach integrating cuckoo optimization and the fuzzy c-means algorithm with the random walkers algorithm. Its accuracy was validated using a clinical liver dataset containing one of the highest numbers of tumors utilized for liver tumor segmentation, 127 tumors in total, with further validation of the results by a consultant radiologist. The proposed method achieved one of the highest accuracies reported in the literature, with a mean overlap error of 22.78% and a Dice similarity coefficient of 0.75 on the 3Dircadb dataset, and a mean overlap error of 15.61% and a Dice similarity coefficient of 0.81 on the MIDAS dataset. It outperformed most other tumor segmentation methods reported in the literature, representing a 6% overlap-error improvement over one of the best performing automatic methods. The framework provided consistently accurate results across the number of tumors and the variations in tumor contrast enhancement and appearance, while the tumor burden was estimated with a mean error of 0.84% on the 3Dircadb dataset. PMID:27540353
Recognition-Based Physical Response to Facilitate EFL Learning
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Shih, Timothy K.; Yeh, Shih-Ching; Chou, Ke-Chien; Ma, Zhao-Heng; Sommool, Worapot
2014-01-01
This study, based on total physical response and cognitive psychology, proposed a Kinesthetic English Learning System (KELS), which utilized Microsoft's Kinect technology to build kinesthetic interaction with life-related contexts in English. A subject test with 39 tenth-grade students was conducted following empirical research method in order to…
Alternative toxicity assessment methods to characterize the hazards of chemical substances have been proposed to reduce animal testing and screen thousands of chemicals in an efficient manner. Resources to accomplish these goals include utilizing large in vitro chemical screening...
USDA-ARS's Scientific Manuscript database
Many methods have been proposed to incorporate molecular markers into breeding programs. Presented is a cost effective marker assisted selection (MAS) methodology that utilizes individual plant phenotypes, seed production-based knowledge of maternity, and molecular marker-determined paternity. Proge...
"But Where Are the Cossacks?": An Alternative Strategy for Popularization.
ERIC Educational Resources Information Center
Fayard, Pierre
1991-01-01
Proposes methods for the optimal popularization of science and technology to shape and influence public opinion about the utility of associated policies and goals. Emphasizes that such popularization should concentrate on enabling laypeople to acquire an adequate level of scientific and technological literacy, applicable to their everyday lives.…
Teaching Linear Algebra: Must the Fog Always Roll In?
ERIC Educational Resources Information Center
Carlson, David
1993-01-01
Proposes methods to teach the more difficult concepts of linear algebra. Examines features of the Linear Algebra Curriculum Study Group Core Syllabus, and presents problems from the core syllabus that utilize the mathematical process skills of making conjectures, proving the results, and communicating the results to colleagues. Presents five…
76 FR 63993 - Proposed Collection; Comment Request for Regulation Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-14
... that support must be reported using the organization's overall method of accounting. All tax-exempt... information displays a valid OMB control number. Books or records relating to a collection of information must... functions of the agency, including whether the information shall have practical utility; (b) the accuracy of...
A cognitive gateway-based spectrum sharing method in downlink round robin scheduling of LTE system
NASA Astrophysics Data System (ADS)
Deng, Hongyu; Wu, Cheng; Wang, Yiming
2017-07-01
A key issue in LTE is how to allocate the radio spectrum resource efficiently. The traditional Round Robin (RR) scheduling scheme may leave many resource residues when allocating resources. When the number of users in the current transmission time interval (TTI) is not a divisor of the number of resource block groups (RBGs), and this situation lasts for a long time, spectrum utilization is greatly decreased. In this paper, a novel spectrum allocation scheme for a cognitive gateway (CG) is proposed, in which LTE spectrum utilization and the CG's throughput are greatly increased by allocating the idle resource blocks in the shared TTI of the LTE system to the CG. Our simulation results show that the proposed spectrum resource sharing method improves LTE spectral utilization and increases the CG's throughput as well as network use time.
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
Coherent-state information concentration and purification in atomic memory
NASA Astrophysics Data System (ADS)
Herec, Jiří; Filip, Radim
2006-12-01
We propose a feasible method of coherent-state information concentration and purification utilizing quantum memory. The method allows us to optimally concentrate and purify information carried by many noisy copies of an unknown coherent state (randomly distributed in time) to a single copy. Thus nonclassical resources and operations can be saved, if we compare information processing with many noisy copies and a single copy with concentrated and purified information.
Qiu, Wei-Hai; Chen, Gui-Yan; Cui, Lu; Zhang, Ting-Ming; Wei, Feng; Yang, Yong
2016-01-01
The aim of this study was to identify differential pathways between papillary thyroid carcinoma (PTC) patients and normal controls utilizing a novel method that combines pathway analysis with a co-expression network. The proposed method comprises three steps. First, we pre-processed the background pathways and obtained representative pathways in PTC. Subsequently, a co-expression network for the representative pathways was constructed using an empirical Bayes (EB) approach to assign a weight value to each pathway. Finally, a random model was used to set the thresholds for identifying differential pathways. We obtained 1267 representative pathways and their weight values based on the co-expressed pathway network, and then, by the criterion Weight > 0.0296, identified a total of 87 differential pathways between PTC patients and normal controls. The top three ranked differential pathways were CREB phosphorylation, attachment of the GPI anchor to the urokinase plasminogen activator receptor (uPAR), and loss of function of SMAD2/3 in cancer. In conclusion, we successfully identified differential pathways (such as CREB phosphorylation, attachment of the GPI anchor to uPAR, and post-translational modification: synthesis of GPI-anchored proteins) for PTC using the proposed pathway co-expression method, and these pathways might be potential biomarkers for targeted therapy and detection of PTC.
A comparison of methods for determining HIV viral set point.
Mei, Y; Wang, L; Holte, S E
2008-01-15
During the course of human immunodeficiency virus (HIV-1) infection, the viral load usually increases sharply to a peak following infection and then drops rapidly to a steady state, where it remains until progression to AIDS. This steady state is often referred to as the viral set point. It is believed that the HIV viral set point results from an equilibrium between the HIV virus and the immune response, and it is an important indicator of AIDS disease progression. In this paper, we analyze a real data set of viral loads measured before antiretroviral therapy is initiated, and propose two-phase regression models that utilize all available data to estimate the viral set point. The advantages of the proposed methods are illustrated by comparing them with two empirical methods, and the reason behind the improvement is also studied. Our results illustrate that, for our data set, the viral load data are highly correlated and it is cost effective to estimate the viral set point from one or two measurements obtained between 5 and 12 months after HIV infection. The utility and limitations of this recommendation are discussed.
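A minimal two-phase regression can be sketched as a linear decline followed by a flat set point, with the change point chosen by grid search over the residual sum of squares. This is an illustration of the idea only; the paper's models and estimation procedure are richer:

```python
import numpy as np

def two_phase_setpoint(t, v, taus):
    """Fit log viral load v(t) as a line for t < tau and a constant
    (the set point) for t >= tau; pick the change point tau from the
    candidate grid `taus` by minimizing the residual sum of squares."""
    best_rss, best_setpoint = np.inf, None
    for tau in taus:
        pre, post = t < tau, t >= tau
        if pre.sum() < 2 or post.sum() < 1:
            continue  # need enough points in each phase
        slope, icpt = np.polyfit(t[pre], v[pre], 1)
        setpoint = v[post].mean()
        rss = ((v[pre] - (slope * t[pre] + icpt)) ** 2).sum() \
            + ((v[post] - setpoint) ** 2).sum()
        if rss < best_rss:
            best_rss, best_setpoint = rss, setpoint
    return best_setpoint

t = np.arange(0, 24, dtype=float)            # months since infection
v = np.where(t < 6, 7.0 - 0.5 * t, 4.0)      # decline, then set point
sp = two_phase_setpoint(t, v, taus=np.arange(2, 20))
print(sp)
```

Unlike averaging a few late measurements, the two-phase fit uses the early decline data as well, which is the sense in which the proposed models "utilize all available data."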
Eigenvectors phase correction in inverse modal problem
NASA Astrophysics Data System (ADS)
Qiao, Guandong; Rahmatalla, Salam
2017-12-01
The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems depends heavily on the quality of the modal parameters obtained from experiments. Because experimental and environmental noise always exists during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive or positive semi-definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with added Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.
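As an illustration of the definiteness constraint, a symmetric spatial matrix that has lost definiteness to noise can be projected back onto the positive semi-definite cone by eigenvalue clipping; this is a simplified stand-in for the semi-definite programming formulation used in the paper, and the example matrix is invented:

```python
import numpy as np

def project_to_psd(a):
    """Project a matrix onto the positive semi-definite cone by
    symmetrizing and clipping negative eigenvalues to zero
    (the Frobenius-norm projection)."""
    sym = 0.5 * (a + a.T)                # symmetrize first
    vals, vecs = np.linalg.eigh(sym)
    vals = np.clip(vals, 0.0, None)      # drop negative eigenvalues
    return vecs @ np.diag(vals) @ vecs.T

# A noisy "stiffness-like" matrix that lost definiteness:
k = np.array([[2.0, -1.2],
              [-1.2, 0.5]])
k_psd = project_to_psd(k)
```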
A Geospatial Data Recommender System based on Metadata and User Behaviour
NASA Astrophysics Data System (ADS)
Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; Finch, C. J.; McGibbney, L. J.
2017-12-01
Earth observations are produced at a high velocity through real-time sensors, reaching tera- to petabytes of geospatial data daily. Discovering and accessing the right data in this massive volume is like finding a needle in a haystack. To help researchers find the right data for study and decision support, a considerable amount of research focused on improving search performance has been proposed, including recommendation algorithms. However, few papers have discussed how to implement a recommendation algorithm in a geospatial data retrieval system. To address this problem, we propose a recommendation engine that improves the discovery of relevant geospatial data by mining and utilizing metadata and user behavior data: 1) metadata-based recommendation considers the correlation of each attribute (i.e., spatiotemporal, categorical, and ordinal) to the data to be found; in particular, a phrase extraction method is used to improve the accuracy of the description similarity; 2) user behavior data are utilized to predict the interest of a user through collaborative filtering; 3) an integration method is designed to combine the results of the above two methods to achieve better recommendations. Experiments show that in the hybrid recommendation list, all precisions from position 1 to 10 are larger than 0.8.
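A minimal sketch of the hybrid idea, blending a metadata similarity with an item-based collaborative-filtering score through a weighted sum (the feature encoding, toy interaction matrix, and the equal weighting are illustrative assumptions, not the engine's actual design):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hybrid_scores(query_meta, item_meta, user_item, user, alpha=0.5):
    """Blend metadata similarity with a collaborative-filtering score.

    query_meta : feature vector of the query (e.g., encoded spatiotemporal
                 and categorical attributes)
    item_meta  : one feature vector per candidate dataset
    user_item  : binary user-by-item interaction matrix
    """
    meta = np.array([cosine(query_meta, m) for m in item_meta])
    # Item-based CF: score items by co-occurrence with the user's history.
    sim = user_item.T @ user_item                # item-item counts
    np.fill_diagonal(sim, 0)
    cf = sim @ user_item[user]
    cf = cf / cf.max() if cf.max() > 0 else cf
    return alpha * meta + (1 - alpha) * cf

meta = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
interactions = np.array([[1, 1, 0],
                         [0, 1, 1]])
scores = hybrid_scores(np.array([1.0, 0.0]), meta, interactions, user=0)
ranking = np.argsort(-scores)
```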
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small-animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated into PET image reconstruction on a graphics processing unit (GPU) platform, owing to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient, and noise. The results indicate that the proposed method has the potential to be used as an alternative to physical DOI designs and achieve comparable imaging performance, while reducing detector/system design cost and complexity.
Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.
Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei
2013-04-01
The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with a matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in the computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
A robust functional-data-analysis method for data recovery in multichannel sensor systems.
Sun, Jian; Liao, Haitao; Upadhyaya, Belle R
2014-08-01
Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, the loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operation of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of the coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of accuracy in recovering strongly skewed signals.
In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
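The grand-median idea can be sketched as a pointwise median across repeated signal trajectories, used here to impute dropped samples; this is a toy illustration of robustness to an outlying channel, not the paper's full FPCA pipeline:

```python
import numpy as np

def grand_median_impute(trajectories):
    """Robustly fill missing samples (NaNs) in a set of sensor trajectories.

    The pointwise median across trajectories (the "grand median" curve)
    is insensitive to outlying trajectories, unlike the grand mean.
    """
    median_curve = np.nanmedian(trajectories, axis=0)
    filled = trajectories.copy()
    mask = np.isnan(filled)
    # Impute each missing sample with the grand median at that time index.
    filled[mask] = np.broadcast_to(median_curve, filled.shape)[mask]
    return filled, median_curve

t = np.linspace(0, 1, 50)
clean = np.sin(2 * np.pi * t)
runs = np.stack([clean, clean + 0.05, clean - 0.05, clean + 5.0])  # one outlier run
runs_missing = runs.copy()
runs_missing[1, 10:15] = np.nan                                    # dropped samples
filled, ref = grand_median_impute(runs_missing)
```

Because the reference curve is a median, the outlying fourth run does not contaminate the imputed values.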
End-to-end system of license plate localization and recognition
NASA Astrophysics Data System (ADS)
Zhu, Siyu; Dianat, Sohail; Mestha, Lalit K.
2015-03-01
An end-to-end license plate recognition system is proposed. It is composed of preprocessing, detection, segmentation, and character recognition stages to find and recognize plates in camera-based still images. The system utilizes connected component (CC) properties to quickly extract the license plate region. A two-stage CC filtering is utilized to address both shape and spatial relationship information, producing high precision and recall values for detection. Floating peaks and valleys of projection profiles are used to cut the license plates into individual characters. A turning-function-based method is proposed to quickly and accurately recognize each character. It is further accelerated using a curvature-histogram-based support vector machine. The INFTY dataset is used to train the recognition system, and the MediaLab license plate dataset is used for testing. The proposed system achieved an 89.45% F-measure for detection and an 87.33% overall recognition rate, which is comparable to current state-of-the-art systems.
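The projection-profile segmentation step can be sketched as follows: sum the binarized ink per column and cut at zero-valleys (the toy plate bitmap is an assumption for illustration; the paper's floating peak/valley detection is more elaborate):

```python
import numpy as np

def segment_by_valleys(binary_plate):
    """Cut a binarized plate image into character spans at the valleys
    (zero columns) of its vertical projection profile."""
    profile = binary_plate.sum(axis=0)          # ink per column
    in_char, start, spans = False, 0, []
    for x, v in enumerate(profile):
        if v > 0 and not in_char:
            in_char, start = True, x            # character starts
        elif v == 0 and in_char:
            in_char = False
            spans.append((start, x))            # character ends at a valley
    if in_char:
        spans.append((start, len(profile)))
    return spans

# Toy plate: two 3-column "characters" separated by a blank column.
plate = np.zeros((5, 9), dtype=int)
plate[:, 1:4] = 1
plate[:, 5:8] = 1
chars = segment_by_valleys(plate)
```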
Salem, Hesham; Mohamed, Dalia
2015-04-05
Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three methods utilize the isoabsorptive point, either in the zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or in the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed that there is no significant difference between the proposed methods and the reported methods regarding both accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and inaccurate estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated value. Moreover, the commonly adopted solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated by updating and iterating the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
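A minimal sketch of the Wiener-process building block at a single stress level: simulate X(t) = μt + σB(t), estimate (μ, σ²) from the independent Gaussian increments, and extrapolate a mean lifetime as the mean first-passage time D/μ. All parameter values are illustrative, and the paper's Bayesian updating across stress levels is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate one Wiener degradation path X(t) = mu*t + sigma*B(t).
mu_true, sigma_true = 2.0, 0.5
t = np.linspace(0, 10, 201)
dt = np.diff(t)
increments = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(dt.size)
x = np.concatenate([[0.0], np.cumsum(increments)])

# Closed-form MLEs from the independent Gaussian increments:
mu_hat = x[-1] / t[-1]
sigma2_hat = np.mean((np.diff(x) - mu_hat * dt) ** 2 / dt)

# Mean time to reach a failure threshold D (first-passage mean D/mu):
D = 15.0
lifetime_hat = D / mu_hat
```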
Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.
Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. CS theory requires a suitable dictionary for sparse representation of the signal. In diffusion MRI (dMRI), CS methods proposed for reconstruction of the diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI), for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction is demonstrated theoretically and empirically. Third, optimization in DL-SPFI is performed only in the small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al.
The experimental results on synthetic and real data indicate that the learned dictionary produces sparser coefficients than the original SPF basis and results in significantly lower reconstruction error than Bilgic et al.'s method.
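The weighted LASSO step can be sketched generically with proximal gradient descent (ISTA), where each coefficient gets its own soft threshold λw_i; the dictionary and weights below are synthetic stand-ins, not the learned SPF dictionary:

```python
import numpy as np

def weighted_lasso(D, y, weights, lam=0.05, n_iter=500):
    """Solve min_c 0.5*||y - Dc||^2 + lam * sum_i w_i*|c_i| by ISTA
    (proximal gradient with a per-coefficient soft threshold)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1/L, L = ||D||_2^2
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)                 # gradient of the smooth part
        z = c - step * grad
        thr = step * lam * weights               # per-coefficient threshold
        c = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return c

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 10))                # synthetic dictionary
c_true = np.zeros(10)
c_true[[2, 7]] = [1.5, -1.0]                     # sparse ground truth
y = D @ c_true
c_hat = weighted_lasso(D, y, np.ones(10))
```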
Graphical approach for multiple values logic minimization
NASA Astrophysics Data System (ADS)
Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.
1999-03-01
Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.
Al-Dmour, Hayat; Al-Ani, Ahmed
2016-04-01
The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. The proposed information security scheme conceals coded Electronic Patient Records (EPRs) in medical images in order to protect the EPRs' confidentiality without affecting the image quality, and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data are converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions than in uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to an improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of the edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first one, which is based on the Hamming code, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image by minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits in the Region of Non-Interest (RONI); owing to its importance, the ROI is preserved from modification. The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving a noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques.
The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal embedding distortions compared to other existing methods. In order to refrain from introducing any modifications to the ROI, the proposed system only utilizes the Region of Non Interest (RONI) in embedding the EPR data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
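A much-simplified sketch of edge-adaptive embedding: detect strong-gradient pixels and hide the ciphertext bits only in their least significant bits. The gradient detector, threshold, and single-bit capacity are illustrative assumptions; the paper uses up to 3 bits per edge pixel plus Hamming/STC coding:

```python
import numpy as np

def embed_in_edges(cover, bits, threshold=30):
    """Hide bits in the LSBs of strong-edge pixels only (a simplified
    edge-adaptive scheme for illustration)."""
    gy, gx = np.gradient(cover.astype(float))
    strength = np.hypot(gx, gy)
    edge_idx = np.flatnonzero(strength.ravel() > threshold)
    assert len(bits) <= edge_idx.size, "not enough edge pixels"
    stego = cover.ravel().copy()
    for b, i in zip(bits, edge_idx):
        stego[i] = (stego[i] & 0xFE) | b         # overwrite the LSB
    return stego.reshape(cover.shape), edge_idx[:len(bits)]

def extract(stego, idx):
    """Read the hidden bits back from the recorded edge positions."""
    return [int(v) & 1 for v in stego.ravel()[idx]]

cover = np.zeros((8, 8), dtype=np.uint8)
cover[:, 4:] = 200                               # sharp vertical edge
secret = [1, 0, 1, 1, 0]
stego, idx = embed_in_edges(cover, secret)
recovered = extract(stego, idx)
```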
Research on Estimates of Xi’an City Life Garbage Pay-As-You-Throw Based on Two-part Tariff method
NASA Astrophysics Data System (ADS)
Yaobo, Shi; Xinxin, Zhao; Fuli, Zheng
2017-05-01
Domestic waste is a quasi-public good, and its pricing cannot be separated from the pricing principles of public economics. Based on the two-part tariff method for urban public utilities, this paper designs a pricing model to match the charging method and estimates the pay-as-you-throw charging standard using data from the past five years in Xi'an. Finally, the paper summarizes the main results and proposes corresponding policy recommendations.
Even Illumination from Fiber-Optic-Coupled Laser Diodes
NASA Technical Reports Server (NTRS)
Howard, Richard T.
2006-01-01
A method of equipping fiber-optic-coupled laser diodes to evenly illuminate specified fields of view has been proposed. The essence of the method is to shape the tips of the optical fibers into suitably designed diffractive optical elements. One of the main benefits afforded by the method would be more nearly complete utilization of the available light. Diffractive optics is a relatively new field of optics in which laser beams are shaped by use of diffraction instead of refraction.
Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin
2016-01-01
Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based motor imagery brain-computer interface (MIBCI) system. In this study, we proposed a new algorithm framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of the ICA-based MIBCI. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.
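The ICA spatial-filtering step can be sketched with a minimal symmetric FastICA; this generic implementation and the synthetic two-source mixture stand in for the paper's single-trial EEG pipeline:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for illustration.
    X: (channels, samples). Returns the full unmixing matrix."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance.
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ Z)
        W_new = (g @ Z.T) / m - np.diag((1.0 - g ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        d2, E2 = np.linalg.eigh(W_new @ W_new.T)
        W = E2 @ np.diag(d2 ** -0.5) @ E2.T @ W_new
    return W @ K

t = np.linspace(0, 1, 2000)
S = np.stack([np.sin(40 * t), np.sign(np.cos(25 * t))])   # two sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                    # mixing ("channels")
X = A @ S
recovered = fastica(X) @ X
```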
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Guang; Sun, Xin; Wang, Yuxin
A new inverse method was proposed to calculate the anisotropic elastic-plastic properties (flow stress) of a thin electrodeposited Ag coating, utilizing nanoindentation tests, a previously reported inverse method for isotropic materials, and three-dimensional (3-D) finite element analyses (FEA). The indentation depth was ~4% of the coating thickness (~10 μm) to avoid substrate effects, and different indentation responses were observed in the longitudinal (L) and transverse (T) directions. The elastic-plastic properties were estimated in the newly developed inverse method by matching the predicted indentation responses in the L and T directions with experimental measurements, considering the indentation size effect (ISE). The results were validated with tensile flow curves measured from a free-standing (FS) Ag film. The current method can be utilized to characterize the anisotropic elastic-plastic properties of coatings and to provide the constitutive properties for coating performance evaluations.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
The navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable it to provide significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming performance compared with legacy navigation signals. Meanwhile, its typical scheme characteristics can provide auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm is employed to jointly estimate the frequency shift of each sub-carrier through the linear relationship between sub-carrier frequency and Doppler, utilizing the known sub-carrier frequencies. Besides, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both the theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
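The WLS estimate itself has the familiar closed form x = (H^T W H)^{-1} H^T W y. A sketch for jointly estimating a common Doppler from several sub-carrier frequency shifts, with weights set from (assumed) per-carrier noise levels standing in for sub-carrier power:

```python
import numpy as np

def wls(H, y, weights):
    """Weighted least squares: minimize (y - Hx)^T W (y - Hx) with
    W = diag(weights); closed form x = (H^T W H)^{-1} H^T W y."""
    W = np.diag(weights)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Sub-carrier frequency shifts are linear in Doppler: df_i = (f_i / f0) * fd.
# (Carrier frequencies, powers, and noise levels here are illustrative.)
f0 = 1575.42e6
f_sub = f0 + np.array([-5e6, -2e6, 0.0, 2e6, 5e6])
fd_true = 500.0                                   # Hz
rng = np.random.default_rng(1)
noise_sd = np.array([5.0, 2.0, 1.0, 2.0, 5.0])    # weaker sub-carriers: noisier
meas = (f_sub / f0) * fd_true + noise_sd * rng.standard_normal(5)
H = (f_sub / f0).reshape(-1, 1)
fd_hat = wls(H, meas, 1.0 / noise_sd ** 2)[0]     # inverse-variance weighting
```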
Critical object recognition in millimeter-wave images with robustness to rotation and scale.
Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi
2017-06-01
Locating critical objects is crucial in various security applications and industries. For example, in security applications such as airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize critical objects even when hidden, without any health risk, owing to their non-ionizing nature. However, millimeter-wave images usually contain wave-like artifacts in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be used for these images, and additional pre-processing and classification methods should be introduced. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images is collected and created for experiments. Experimental results show that a typical classification method such as support vector machines recognizes only 45.5% of one type of critical object at a 34.2% false alarm rate (FAR), which is drastically poor recognition. The same method within the proposed recognition framework achieves a 92.9% recognition rate at 0.43% FAR, which indicates a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which is not yet widely noted in the field of millimeter-wave image analysis.
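The rotation/scale cancellation idea can be sketched with plain PCA on a 2-D point cloud: center, rotate onto the principal axes, and normalize by total variance, so that rotated and scaled copies map to the same canonical form. The sign convention and toy data are assumptions; the paper's pre-processing operates on millimeter-wave images:

```python
import numpy as np

def canonicalize(points):
    """Map a 2-D point cloud to a canonical pose: remove translation
    (centering), rotation (PCA axes), and scale (total variance)."""
    centered = points - points.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(vals)[::-1]                 # major axis first
    aligned = centered @ vecs[:, order]
    # Resolve the per-axis sign ambiguity with the skewness sign.
    s = np.sign((aligned ** 3).sum(axis=0))
    s[s == 0] = 1.0
    return (aligned * s) / np.sqrt(vals.sum())

rng = np.random.default_rng(2)
shape = rng.standard_normal((100, 2)) * np.array([3.0, 1.0])  # elongated blob
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
rotated_scaled = 2.5 * shape @ R.T + np.array([10.0, -4.0])
a, b = canonicalize(shape), canonicalize(rotated_scaled)
```

Both clouds map to the same canonical coordinates, so a downstream classifier sees a pose-invariant input.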
A novel accelerated oxidative stability screening method for pharmaceutical solids.
Zhu, Donghua Alan; Zhang, Geoff G Z; George, Karen L S T; Zhou, Deliang
2011-08-01
Despite the fact that oxidation is the second most frequent degradation pathway for pharmaceuticals, means of evaluating the oxidative stability of pharmaceutical solids, especially effective stress testing, are still lacking. This paper describes a novel experimental method for peroxide-mediated oxidative stress testing on pharmaceutical solids. The method utilizes urea-hydrogen peroxide, a molecular complex that undergoes solid-state decomposition and releases hydrogen peroxide vapor at elevated temperatures (e.g., 30°C), as a source of peroxide. The experimental setting for this method is simple, convenient, and can be operated routinely in most laboratories. The fundamental parameter of the system, that is, hydrogen peroxide vapor pressure, was determined using a modified spectrophotometric method. The feasibility and utility of the proposed method in solid form selection have been demonstrated using various solid forms of ephedrine. No degradation was detected for ephedrine hydrochloride after exposure to the hydrogen peroxide vapor for 2 weeks, whereas both anhydrate and hemihydrate free base forms degraded rapidly under the test conditions. In addition, both the anhydrate and the hemihydrate free base degraded faster when exposed to hydrogen peroxide vapor at 30°C under dry condition than at 30°C/75% relative humidity (RH). A new degradation product was also observed under the drier condition. The proposed method provides more relevant screening conditions for solid dosage forms, and is useful in selecting optimal solid form(s), determining potential degradation products, and formulation screening during development. Copyright © 2011 Wiley-Liss, Inc.
An improved reaction path optimization method using a chain of conformations
NASA Astrophysics Data System (ADS)
Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro
2018-05-01
The efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully utilized to define the minimum energy path (MEP) of the isomerization of the glycine molecule in water.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
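The coefficient-shrinkage step can be sketched with a one-level Haar transform and soft thresholding of the detail bands; Haar is a stand-in for the wavelet/contourlet transforms mentioned above, and the threshold is an illustrative choice, roughly twice the per-band noise level:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0            # vertical average
    d = (x[0::2] - x[1::2]) / 2.0            # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(3)
clean = np.outer(np.hanning(64), np.hanning(64))   # smooth "structure"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
LL, LH, HL, HH = haar2d(noisy)
thr = 0.1                                          # illustrative threshold
denoised = ihaar2d(LL, soft(LH, thr), soft(HL, thr), soft(HH, thr))
```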
Martínez, Sergio; Sánchez, David; Valls, Aida
2013-04-01
Structured patient data like Electronic Health Records (EHRs) are a valuable source for clinical research. However, the sensitive nature of such information requires some anonymisation procedure to be applied before releasing the data to third parties. Several studies have shown that the removal of identifying attributes, like the Social Security Number, is not enough to obtain an anonymous data file, since unique combinations of other attributes, such as rare diagnoses and personalised treatments, may lead to the disclosure of a patient's identity. To tackle this problem, Statistical Disclosure Control (SDC) methods have been proposed to mask sensitive attributes while preserving, up to a certain degree, the utility of the anonymised data. Most of these methods focus on continuous-scale numerical data. Considering that part of the clinical data found in EHRs is expressed with non-numerical attributes, such as diagnoses, symptoms and procedures, their application to EHRs produces far from optimal results. In this paper, we propose a general framework to enable the accurate application of SDC methods to non-numerical clinical data, with a focus on the preservation of semantics. To do so, we exploit structured medical knowledge bases like SNOMED CT to propose semantically-grounded operators to compare, aggregate and sort non-numerical terms. Our framework has been applied to several well-known SDC methods and evaluated using a real clinical dataset with non-numerical attributes. Results show that the exploitation of medical semantics produces anonymised datasets that better preserve the utility of EHRs. Copyright © 2012 Elsevier Inc. All rights reserved.
ANN based Real-Time Estimation of Power Generation of Different PV Module Types
NASA Astrophysics Data System (ADS)
Syafaruddin; Karatepe, Engin; Hiyama, Takashi
Distributed generation is expected to become more important in future power generation systems. Utilities need to find solutions that help manage resources more efficiently. Effective smart grid solutions use real-time data to help refine and pinpoint inefficiencies in order to maintain secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required in this case because the relationship between the maximum power of PV modules and the open-circuit voltage and temperature is nonlinear and cannot easily be expressed by an analytical expression for each technology. The proposed ANN method uses input signals of open-circuit voltage and cell temperature, instead of irradiance and ambient temperature, to determine the estimated maximum power generation of PV modules. It is important for the utility to be able to perform this estimation for optimal operating points and for diagnostic purposes that may be an early indicator of a need for maintenance and of optimal energy management. The proposed method is verified accurately through a developed real-time simulator using daily irradiance and cell temperature changes.
Star tracking method based on multiexposure imaging for intensified star trackers.
Yu, Wenbo; Jiang, Jie; Zhang, Guangjun
2017-07-20
The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of the existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. The accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristic of multiexposure imaging. The estimation of the complete motion parameters is utilized to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real starry night sky observation are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as the wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework that utilizes the area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. The low-pass bands are then fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet-based, Contourlet-based, and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
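The two fusion rules described above can be sketched directly on coefficient arrays. This is a minimal NumPy illustration of the rules only (the window size and the local-activity weighting are assumptions); the dual-tree Contourlet decomposition itself is not reproduced here:

```python
import numpy as np

def local_std(band, win=3):
    """Area-based standard deviation over a sliding win x win window."""
    pad = win // 2
    p = np.pad(band, pad, mode="reflect")
    out = np.empty(band.shape, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].std()
    return out

def fuse_lowpass(a, b, win=3):
    """Weighted average of two low-pass bands; the weight of each image
    follows its local activity (area standard deviation)."""
    sa, sb = local_std(a, win), local_std(b, win)
    w = sa / (sa + sb + 1e-12)
    return w * a + (1.0 - w) * b

def fuse_highpass(a, b):
    """'Max-absolute' rule: keep the coefficient of larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

In the full pipeline these rules would be applied per subband before the inverse transform.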
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper presents an error tracking control method for overhead crane systems in which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require the initial payload swing angle to be zero, a requirement usually assumed in conventional methods. Its significant features are superior control performance and strong robustness to different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which makes the method easy to implement in practical applications. By converting the error tracking overhead crane dynamics into the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results validate the superior performance of the proposed error tracking control method.
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku
2015-03-01
This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed for automated extraction of organ regions from volumetric medical images, but their empirical parameters must be tuned to obtain precise organ regions. This paper proposes an organ segmentation method based on structured output learning. Our method utilizes a graphical model with binary features that represent the relationship between voxel intensities and organ labels. We optimize the weights of the graphical model with a structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning, with an organ label estimation error of 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility simply as feature vector instability, without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.
Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.
1997-02-01
A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital in human liver tissue, based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method significantly reduces sample handling and utilizes the entire liver homogenate. It yields recoveries and precision comparable to conventional procedures without requiring an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, in contrast with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer containing three different convolution kernel sizes. Second, to avoid overfitting of the deep neural network, we apply dropout, which randomly deactivates neurons and slightly improves the classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy than other methods.
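A minimal sketch of the multi-scale idea for a single pixel spectrum, using NumPy convolutions with three kernel sizes followed by ReLU. The kernels here are random and untrained, and the filter counts and sizes are assumptions rather than the paper's architecture; the point is only how feature maps from several receptive fields are concatenated:

```python
import numpy as np

rng = np.random.default_rng(1)

def multi_scale_features(spectrum, n_filters=4, sizes=(3, 5, 7)):
    """Convolve one pixel spectrum with random kernels of three sizes and
    concatenate the ReLU feature maps (untrained filters, shape demo only)."""
    feats = []
    for k in sizes:
        for _ in range(n_filters):
            kernel = rng.normal(0.0, 1.0, k)
            fmap = np.convolve(spectrum, kernel, mode="same")
            feats.append(np.maximum(fmap, 0.0))   # ReLU
    return np.stack(feats)            # shape: (len(sizes) * n_filters, n_bands)

x = rng.normal(0.0, 1.0, 103)         # one Pavia-like spectrum (103 bands)
F = multi_scale_features(x)
```

In the trained network these kernels would be learned jointly with the classifier, with dropout applied between layers.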
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1986-01-01
Further development of utility program software for analyzing final results of Earth rotation parameter determination from different space geodetic systems was completed. Main simulation experiments were performed. Results and conclusions were compiled. The utilization of range-difference observations in geodynamics is also examined. A method based on the Bayesian philosophy and entropy measure of information is given for the elucidation of time-dependent models of crustal motions as part of a proposed algorithm. The strategy of model discrimination and design of measurements is illustrated in an example for the case of crustal deformation models.
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU, but even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially, and since its complexity is low, this decrease comes at a relatively low cost. We therefore conclude that the proposed re-mapping algorithm would be useful for lowering overall execution times in many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The calibration methods commonly employed for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. We therefore present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. The wavelength calibration is then completed by inverse conversion from k-space to the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method produced reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the four-peak case. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution owing to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
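The zero-crossing step can be sketched as follows. The pixel-to-wavelength mapping, optical path difference, and array sizes below are assumptions; the point is only that the crossings of an interferogram that is sinusoidal in k-space are equally spaced in k even though they are unevenly spaced across pixels:

```python
import numpy as np

def zero_crossings(signal):
    """Fractional sample positions where the signal crosses zero
    (linear interpolation between the two bracketing samples)."""
    s = np.asarray(signal, dtype=float)
    idx = np.nonzero(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]
    return idx + s[idx] / (s[idx] - s[idx + 1])

# Assumed spectrometer mapping: linear in wavelength, hence nonlinear in k.
pix = np.arange(2048)
lam = 800e-9 + 0.05e-9 * pix           # wavelength per pixel [m]
k = 2.0 * np.pi / lam                  # wavenumber per pixel [1/m]
opd = 100e-6                           # assumed optical path difference [m]
fringe = np.cos(opd * k)               # interferogram, sinusoidal in k

zc = zero_crossings(fringe)            # crossings in fractional pixel units
k_zc = np.interp(zc, pix, k)           # the same crossings mapped into k-space
```

Anchoring the relative k positions to absolute wavenumbers would then use the calibration lamp lines, as described above.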
Jastrzębska, Aneta; Piasta, Anna; Szłyk, Edward
2014-01-01
A simple and useful method for the determination of biogenic amines in beverage samples, based on isotachophoretic separation, is described. The proposed procedure permits the simultaneous analysis of histamine, tyramine, cadaverine, putrescine, tryptamine, 2-phenylethylamine, spermine and spermidine. The data presented demonstrate the utility, simplicity, flexibility, sensitivity and environmentally friendly character of the proposed method. The precision of the method, expressed as coefficients of variation, varied from 0.1% to 5.9% for beverage samples, whereas recoveries varied from 91% to 101%. The results for the determination of biogenic amines were compared with an HPLC procedure based on pre-column derivatisation of the biogenic amines with dansyl chloride. Furthermore, the derivatisation procedure was optimised by verifying the concentration and pH of the buffer, the addition of organic solvents, and the reaction time and temperature.
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinctive aspect of this method is that no prescribed assumptions are required; the heavy computational effort is reduced and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems occurring in engineering, physics and other fields. Numerical results are obtained for verification and demonstration purposes using the Mathematica software.
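For context, the transform pair the method builds on can be written out. The pair below is the fractional reduced differential transform in the Caputo sense as commonly given in the FRDTM literature; the notation, including the expansion point t_0 and order 0 < α ≤ 1, is assumed here rather than taken from the paper:

```latex
% Fractional reduced differential transform pair (Caputo sense), as commonly
% stated in the FRDTM literature; t_0 and 0 < \alpha \le 1 are assumed notation.
U_k(\mathbf{x}) = \frac{1}{\Gamma(k\alpha + 1)}
    \Big[\, D_t^{k\alpha}\, u(\mathbf{x}, t) \,\Big]_{t = t_0},
\qquad
u(\mathbf{x}, t) = \sum_{k=0}^{\infty} U_k(\mathbf{x})\, (t - t_0)^{k\alpha}
```

The recursion for the coefficients U_k then follows from substituting this series into the given PDE.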
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered one of the most promising biometric authentication methods owing to their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. Because an improperly segmented network may degrade recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary (base-8) number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
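A simplified reading of the coding step might look like the sketch below: the local gradient orientation at each pixel is quantized into one of 8 directions, giving an octal code per pixel. This is an illustration of the orientation-coding idea only, not the exact LDC operator:

```python
import numpy as np

def local_directional_code(img):
    """Quantize the local gradient orientation at every pixel into one of
    8 directions, giving a base-8 (octonary) code per pixel."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)              # row- and column-direction gradients
    theta = np.arctan2(gy, gx)             # orientation in (-pi, pi]
    return np.floor((theta + np.pi) / (np.pi / 4.0)).astype(int) % 8
```

Matching would then compare the code maps of two vein images, e.g. by Hamming-style distance over the codes.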
Yehia, Ali M
2013-05-15
A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. The specificity of the method was investigated, and relative standard deviations were less than 1.5%. The accuracy, precision and repeatability of the proposed method were also investigated according to the ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models in the high-dimension, low-sample-size setting. Our method is motivated by modeling the associations among different biological molecules based on multiple types of high-dimensional genomic data. In particular, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels, and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of RNA expression levels on DNA copy number alterations, and of gene expression on DNA methylation, through multivariate regression models, and utilize a boosting-type method to handle the high dimensionality as well as to model possibly nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
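The core component-wise boosting loop can be sketched for a single response as below. The paper's method is multivariate and allows nonlinear base learners; this univariate L2-boosting sketch with linear base learners, and the step count and shrinkage value, are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def componentwise_boost(X, y, steps=200, nu=0.1):
    """L2-boosting with component-wise linear base learners: each step fits
    every single predictor to the current residual, keeps the best one, and
    moves its coefficient by a shrunken amount nu."""
    beta = np.zeros(X.shape[1])
    resid = y.astype(float).copy()
    for _ in range(steps):
        coefs = X.T @ resid / (X ** 2).sum(axis=0)   # per-column LS coefficients
        gain = coefs * (X.T @ resid)                 # RSS reduction per column
        j = int(np.argmax(gain))
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta

# High-dimension, low-sample-size toy data: 50 samples, 200 predictors,
# only the first three predictors truly active.
X = rng.normal(0.0, 1.0, (50, 200))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.1 * rng.normal(0.0, 1.0, 50)
beta = componentwise_boost(X, y)
```

Because only one coordinate moves per step, the fitted coefficient vector stays sparse, which is what makes this family of methods attractive when predictors vastly outnumber samples.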
76 FR 20624 - Oglethorpe Power Corporation: Proposed Biomass Power Plant
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... DEPARTMENT OF AGRICULTURE Rural Utilities Service Oglethorpe Power Corporation: Proposed Biomass Power Plant AGENCY: Rural Utilities Service, USDA. ACTION: Notice of Availability of a Draft...) biomass plant and related facilities (Proposal) in Warren County, Georgia. The purpose of the Proposal is...
Wu, John Z; Cutlip, Robert G; Welcome, Daniel; Dong, Ren G
2006-01-01
Knowledge of the viscoelastic properties of soft tissues is essential for finite element modelling of the stress/strain distributions in the finger pad during vibratory loading, which is important for exploring the mechanism of hand-arm vibration syndrome. In conventional procedures, skin and subcutaneous tissue must be separated to test their viscoelastic properties. In this study, a novel method is proposed to simultaneously determine the viscoelastic properties of skin and subcutaneous tissue in uniaxial stress relaxation tests. A mathematical approach is derived to obtain the creep and relaxation characteristics of skin and subcutaneous tissue from uniaxial stress relaxation data of skin/subcutaneous composite specimens. The micro-structures of the collagen fiber networks in the soft tissue, which underlie its mechanical characteristics, remain intact in the proposed method. Therefore, the viscoelastic properties obtained using the proposed method are more physiologically relevant than those obtained using the conventional method. The proposed approach was utilized to measure the viscoelastic properties of pig soft tissues. The relaxation curves of pig skin and subcutaneous tissue obtained in the current study agree well with those in the literature. Using the proposed approach, reliable material properties of soft tissues can be obtained in a cost- and time-efficient manner while simultaneously improving physiological relevance.
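Although the paper's composite-specimen mathematics is not reproduced here, a standard way to summarize a measured relaxation curve is a Prony series fit, which becomes plain linear least squares when the time constants are fixed. The time constants, moduli, and noise level below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_prony(t, g, taus):
    """Fit G(t) = g_inf + sum_i g_i * exp(-t / tau_i) with the time
    constants tau_i fixed, so the problem is linear least squares."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef                        # [g_inf, g_1, g_2, ...]

t = np.linspace(0.0, 10.0, 400)        # time [s]
true = 1.0 + 2.0 * np.exp(-t / 0.5) + 0.8 * np.exp(-t / 3.0)
noisy = true + 0.01 * rng.normal(0.0, 1.0, t.size)
coef = fit_prony(t, noisy, taus=(0.5, 3.0))
```

A composite-specimen method like the one in the abstract would instead relate two such curves (composite and one constituent) to recover the other constituent's parameters.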
Multiview boosting digital pathology analysis of prostate cancer.
Kwak, Jin Tae; Hewitt, Stephen M
2017-04-01
Various digital pathology tools have been developed to aid in analyzing tissues and improving cancer pathology. The multi-resolution nature of cancer pathology, however, has not been fully analyzed and utilized. Here, we develop an automated, cooperative, and multi-resolution method for improving prostate cancer diagnosis. Digitized tissue specimen images are obtained from 5 tissue microarrays (TMAs). The TMAs include 70 benign and 135 cancer samples (TMA1), 74 benign and 89 cancer samples (TMA2), 70 benign and 115 cancer samples (TMA3), 79 benign and 82 cancer samples (TMA4), and 72 benign and 86 cancer samples (TMA5). The tissue specimen images are segmented using intensity- and texture-based features. Using the segmentation results, a number of morphological features from lumens and epithelial nuclei are computed to characterize tissues at different resolutions. Applying a multiview boosting algorithm, tissue characteristics obtained at differing resolutions are cooperatively combined to achieve accurate cancer detection. In segmenting prostate tissues, the multiview boosting method achieved an AUC ≥ 0.97 on TMA1. For detecting cancers, it achieved an AUC of 0.98 (95% CI: 0.97-0.99) when trained on TMA2 and tested on TMA3, TMA4, and TMA5. The proposed method was superior to single-view approaches that utilize features from a single resolution or merge features from all resolutions. Moreover, its performance was insensitive to the choice of training dataset: trained on TMA3, TMA4, and TMA5, it obtained AUCs of 0.97 (95% CI: 0.96-0.98), 0.98 (95% CI: 0.96-0.99), and 0.97 (95% CI: 0.96-0.98), respectively. The multiview boosting method integrates information from multiple resolutions in an effective and efficient fashion, identifies cancers with high accuracy, and holds great potential for improving digital pathology tools and research.
Optic disk localization by a robust fusion method
NASA Astrophysics Data System (ADS)
Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin
2013-02-01
The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The third searches the vertical and horizontal regions of interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary accuracy on the STARE database is 81.5%, while a higher accuracy of 99% is obtained on the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The results demonstrate the high potential of this method for use in retinal CAD systems.
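One simple way to fuse several detectors' outputs, in the spirit of the intelligent fusion described above, is sketched below. The paper's actual fusion rule is not reproduced; the support radius and confidence weighting are assumptions:

```python
import numpy as np

def fuse_locations(candidates, radius=25.0):
    """Confidence-weighted fusion of (x, y, confidence) candidates: keep the
    group of mutually close detections with the largest summed confidence
    and return its confidence-weighted centroid."""
    pts = np.array([(x, y) for x, y, _ in candidates], dtype=float)
    conf = np.array([c for _, _, c in candidates], dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    support = (d <= radius).astype(float) @ conf     # summed confidence nearby
    keep = d[int(np.argmax(support))] <= radius      # the best-supported group
    w = conf[keep] / conf[keep].sum()
    return tuple(w @ pts[keep])

# Two detectors agree; the third is an outlier and is discarded.
fused = fuse_locations([(100, 120, 0.9), (104, 118, 0.7), (300, 40, 0.5)])
```

The effect is that a single detector failing on a pathological image is outvoted by the other two, which is the motivation for fusing three independent approaches.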
NASA Astrophysics Data System (ADS)
Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham
2014-11-01
This work presents a comparative study of a novel progressive spectrophotometric resolution technique, the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques: successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All of the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines, and a comparative study was conducted between them regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.
A Real-Time Infrared Ultra-Spectral Signature Classification Method via Spatial Pyramid Matching
Mei, Xiaoguang; Ma, Yong; Li, Chang; Fan, Fan; Huang, Jun; Ma, Jiayi
2015-01-01
The state-of-the-art ultra-spectral sensor technology brings new hope for high-precision applications due to its high spectral resolution. However, it also brings new challenges, such as high data dimension and noise problems. In this paper, we propose a real-time method for infrared ultra-spectral signature classification via spatial pyramid matching (SPM), which includes two aspects. First, we introduce an infrared ultra-spectral signature similarity measure via SPM, which is the foundation of the matching-based classification method. Second, we propose a classification method with reference spectral libraries, which utilizes the SPM-based similarity for real-time infrared ultra-spectral signature classification with robust performance. Specifically, instead of matching against each spectrum in the spectral library, our method is based on feature matching, which includes a feature library-generating phase. We calculate the SPM-based similarity between the feature of the input spectrum and that of each spectrum in the reference feature library, then take the class index of the spectrum with maximum similarity as the final result. Experimental comparisons on two publicly available datasets demonstrate that the proposed method effectively improves real-time classification performance and robustness to noise. PMID:26205263
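The flavor of an SPM-style similarity on 1-D signatures can be sketched with per-segment histograms and histogram intersection. The bin count, level count, and pyramid weights here are simplifying assumptions, not the paper's exact measure:

```python
import numpy as np

def spm_similarity(a, b, levels=3, bins=8):
    """Pyramid-style similarity for 1-D signatures: amplitude histograms are
    intersected over progressively finer segments; finer levels get larger
    weights (a simplified stand-in for the SPM weighting scheme)."""
    a = np.asarray(a, dtype=float); b = np.asarray(b, dtype=float)
    lo = min(a.min(), b.min()); hi = max(a.max(), b.max())
    edges = np.linspace(lo, hi, bins + 1)
    sim = total_w = 0.0
    for l in range(levels):
        w = 2.0 ** (l - levels + 1)
        for seg_a, seg_b in zip(np.array_split(a, 2 ** l),
                                np.array_split(b, 2 ** l)):
            ha, _ = np.histogram(seg_a, edges)
            hb, _ = np.histogram(seg_b, edges)
            sim += w * np.minimum(ha, hb).sum()
        total_w += w
    return sim / (total_w * a.size)

x = np.sin(np.linspace(0.0, 6.0, 256))                          # reference
y = x + 0.05 * np.random.default_rng(4).normal(0.0, 1.0, 256)   # near-duplicate
z = np.cos(np.linspace(0.0, 20.0, 256))                         # different class
```

The segment structure is what distinguishes this from a plain histogram match: two spectra with similar global amplitude statistics but different band locations score lower at the finer levels.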
NASA Astrophysics Data System (ADS)
Xu, Kaili
Wyoming is by far the largest coal-producing state in the US, but local utilization is extremely low. As much as 92% of Wyoming's coal is shipped to other states and is mainly consumed by their electricity producers. Coal accounts for more than 50% of US electricity generation and is one of the least expensive energy sources. Wyoming could utilize its coal better by exporting electricity instead of exporting the coal only in its raw form. Natural gas is another important energy resource in Wyoming, but its local utilization is even lower. As a result of the development of coalbed methane fields, natural gas production in Wyoming is almost in pace with its coal production. In addition to constructing new pipelines, new transmission lines should be considered as an alternative way of exporting this energy. Because of their enormous electricity market sizes and high electricity prices, California, Texas and Illinois are chosen as the target markets for Wyoming's electricity. The proposed transmission schemes use High Voltage DC (HVDC) lines, which are suitable for long-distance and cross-system power transmission. Technical and economic feasibility is studied in detail. The Wyoming-California scheme has a better return on investment than both the Wyoming-Texas and the Wyoming-Illinois schemes. A major drawback of HVDC transmission is the high level of harmonics generated by the converters, so elaborate filtering is required on both the AC and the DC sides. A novel pulse-multiplication method is proposed in the thesis to reduce the harmonics at the converter source. By introducing an averaging inductor, the proposed method uses fewer thyristors to achieve the same high-pulse operation as the existing series scheme. The reduction in thyristors makes the switching circuit more reliable and easier to control and maintain. Harmonic analysis shows that the harmonic level can be reduced to about one third of that of the original system.
The proposed method is also simulated using the Real Time Digital Simulator (RTDS) under a few simplifying assumptions. Simulation results for various operating conditions confirm the theoretical analysis.
Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water
Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong
2017-01-01
This paper presents a joint estimation method for source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the arrival-angle information of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former is used with a ray tracing technique to backpropagate the rays and produce an ambiguity surface of source range. The latter utilizes Lloyd's mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical simulations and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
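The Lloyd's mirror ingredient can be illustrated numerically: the D and SR arrivals interfere to produce frequency-domain notches whose spacing is the reciprocal of the travel-time difference, and that difference encodes source depth through the geometry. The travel times, analysis band, and unit-magnitude surface reflection below are assumptions:

```python
import numpy as np

f = np.linspace(100.0, 500.0, 4000)       # analysis band [Hz] (assumed)

def interference_spectrum(tau_d, tau_sr):
    """Magnitude of summed D and SR arrivals; the surface reflection flips
    sign, so notches appear where f * (tau_sr - tau_d) is an integer."""
    return np.abs(np.exp(-2j * np.pi * f * tau_d)
                  - np.exp(-2j * np.pi * f * tau_sr))

tau_d, tau_sr = 2.000, 2.025              # assumed travel times [s]
spec = interference_spectrum(tau_d, tau_sr)

# The fringe rate of |spectrum|^2 along the frequency axis recovers the
# travel-time difference between the two paths.
df = f[1] - f[0]
power = np.abs(np.fft.rfft(spec ** 2 - np.mean(spec ** 2)))
delta_tau_hat = np.fft.rfftfreq(f.size, d=df)[int(np.argmax(power))]
```

In the full method this depth cue is combined with the range ambiguity surface from ray backpropagation of the D-path arrival angle.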
Bayramoglu, Husnu; Komurcugil, Hasan
2014-07-01
A time-varying sliding-coefficient-based decoupled terminal sliding mode control strategy is presented for a class of fourth-order systems. First, the fourth-order system is decoupled into two second-order subsystems. The sliding surface of each subsystem is designed with time-varying coefficients. Then, the control target of one subsystem is embedded into the other subsystem. Thereafter, a terminal sliding mode control method is utilized to make both subsystems converge to their equilibrium points in finite time. Simulation results on the inverted pendulum system demonstrate that the proposed method exhibits a considerable improvement in terms of a faster dynamic response and lower IAE and ITAE values compared with existing decoupled control methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
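A minimal simulation of the time-varying sliding-coefficient idea on one second-order subsystem (a double integrator) is sketched below. For simplicity the terminal (fractional-power) surface is replaced by a linear one, and a smooth tanh reaching law stands in for the discontinuous sign function; the gains and coefficient schedule are assumptions:

```python
import numpy as np

# Double-integrator subsystem x'' = u driven to the origin with a
# time-varying sliding coefficient c(t).
dt, T = 1e-3, 5.0
x, v = 1.0, 0.0                       # initial error and error rate
for step in range(int(T / dt)):
    t = step * dt
    c = min(1.0 + 2.0 * t, 5.0)       # sliding coefficient grows, then saturates
    s = c * x + v                     # sliding surface s = c(t) e + e_dot
    u = -10.0 * np.tanh(20.0 * s)     # smooth reaching law (sign() stand-in)
    v += u * dt                       # explicit Euler integration
    x += v * dt
```

Growing c(t) stiffens the surface over time, speeding convergence once the state has reached it; the terminal surface in the paper additionally guarantees finite-time convergence, which this linear sketch does not.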
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
Allowable SEM noise for unbiased LER measurement
NASA Astrophysics Data System (ADS)
Papavieros, George; Constantoudis, Vassilios; Gogolides, Evangelos
2018-03-01
Recently, a novel method for the calculation of unbiased line edge roughness (LER) based on power spectral density (PSD) analysis has been proposed. In this paper, first, an alternative method is discussed and investigated, utilizing the height-height correlation function (HHCF) of edges. The HHCF-based method enables the unbiased determination of the whole triplet of LER parameters: besides the rms value, the correlation length and the roughness exponent. The key to both methods is the sensitivity of the PSD and HHCF to noise at high frequencies and short distances, respectively. Secondly, we elaborate a testbed of synthesized SEM images with controlled LER and noise to justify the effectiveness of the proposed unbiased methods. Our main objective is to find the boundaries of the methods with respect to noise levels and roughness characteristics for which they remain reliable, i.e., the maximum amount of noise for which the output results agree with the known, controlled inputs. At the same time, we also set the extremes of the roughness parameters for which the methods retain their accuracy.
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Xinyu; Wang, Nanshuo; Yu, Xiaojun; Bo, En; Chen, Si; Liu, Linbo
2017-02-01
Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of biological tissue and is widely used for the diagnosis of ocular diseases. However, OCT images suffer from speckle noise, which is typically considered multiplicative in nature and reduces image resolution and contrast. In this study, we propose a two-step iteration (TSI) method to suppress this noise. We first utilize the augmented Lagrange method to recover a low-rank OCT image and remove additive Gaussian noise, and then employ the simple and efficient split Bregman method to solve the total variation denoising model. We validated the proposed method using images of swine, rabbit and human retina. Results demonstrate that our TSI method outperforms other popular methods in achieving higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) while preserving important structural details, such as tiny capillaries and thin layers in retinal OCT images. In addition, the results of our TSI method show clearer boundaries and maintain high image contrast, which facilitates better image interpretation and analysis.
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost, and battery capacity is closely tied to the lifetime of the sensor nodes. Efficient energy management is therefore important for extending node lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores demands attention to the problem of reducing power consumption in multi-core processors. In this paper, we propose an energy-efficient scheduling method for sensor nodes with uniform multi-core processors. We extend T-Ler plane based scheduling for globally optimal scheduling of uniform multi-core processors to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize DPM. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation models in the Redundancy Analysis framework, based on so-called Extended Redundancy Analysis (ERA), has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we present a small-sample simulation study. Moreover, we present an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Yun, Kyungwon; Lee, Hyunjae; Bang, Hyunwoo; Jeon, Noo Li
2016-02-21
This study proposes a novel way to achieve high-throughput image acquisition based on a computer-recognizable micro-pattern implemented on a microfluidic device. We integrated the QR code, a two-dimensional barcode system, onto the microfluidic device to simplify imaging of multiple ROIs (regions of interest). A standard QR code pattern was modified into arrays of cylindrical polydimethylsiloxane (PDMS) structures. Utilizing recognition of the micro-pattern, the proposed system enables: (1) device identification, which allows referencing additional information about the device, such as imaging sequences or the ROIs; and (2) construction of a coordinate system for an arbitrarily located microfluidic device with respect to the stage. Based on these functionalities, the proposed method performs one-step high-throughput imaging for data acquisition in microfluidic devices without further manual exploration and locating of the desired ROIs. In our experience, the proposed method significantly reduced acquisition preparation time. We expect the method to substantially improve data acquisition and analysis for prototype devices.
NASA Astrophysics Data System (ADS)
Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho
2016-03-01
We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbines (WTs). An autocorrelation function that represents the physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy of fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
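For orientation, plain rectangular-window TSA on a synthetic once-per-revolution signal can be sketched as below; this is the conventional baseline the abstract improves on, not the autocorrelation-windowed ATSA itself, and all numbers are made up.

```python
import numpy as np

def tsa(signal, samples_per_rev):
    """Plain time synchronous average: reshape into whole revolutions
    (discarding any trailing partial one) and average them pointwise,
    so synchronous components survive and asynchronous noise cancels."""
    n_rev = len(signal) // samples_per_rev
    segs = np.reshape(signal[:n_rev * samples_per_rev],
                      (n_rev, samples_per_rev))
    return segs.mean(axis=0)

# Synthetic test: a once-per-rev deterministic pattern buried in noise.
rng = np.random.default_rng(0)
spr = 128                                  # samples per revolution
pattern = np.sin(2 * np.pi * np.arange(spr) / spr)
raw = np.tile(pattern, 200) + rng.normal(0.0, 1.0, 200 * spr)
averaged = tsa(raw, spr)
```

Averaging over 200 revolutions reduces the noise standard deviation by roughly a factor of sqrt(200), which is why TSA becomes data-hungry when little stationary data is available, the situation ATSA targets.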
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assessing image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to have better performance than the original single-scale method. Because it uses features from five different domains at multiple image scales, and the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment model is robust with respect to more than 24 image distortion types. It can also be used for the evaluation of images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
Glue detection based on teaching points constraint and tracking model of pixel convolution
NASA Astrophysics Data System (ADS)
Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen
2018-01-01
On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions using a distance, which is the height of the rectangular region. Pixel convolution along the motion direction is proposed to extract the edges of the glue in each local rectangular region. A dataset with different illuminations and stripe shapes of varying complexity, comprising 500,000 images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method detects the edges of the glue accurately and that shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) provide complementary properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that it leads to good classification performance and produces informative features for describing RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
Uniqueness and reconstruction in magnetic resonance-electrical impedance tomography (MR-EIT).
Ider, Y Ziya; Onart, Serkan; Lionheart, William R B
2003-05-01
Magnetic resonance-electrical impedance tomography (MR-EIT) was first proposed in 1992. Since then various reconstruction algorithms have been suggested and applied. These algorithms use peripheral voltage measurements and internal current density measurements in different combinations. In this study the problem of MR-EIT is treated as a hyperbolic system of first-order partial differential equations, and three numerical methods are proposed for its solution. This approach is not utilized in any of the algorithms proposed earlier. The numerical solution methods are integration along equipotential surfaces (method of characteristics), integration on a Cartesian grid, and inversion of a system matrix derived by a finite difference formulation. It is shown that if some uniqueness conditions are satisfied, then using at least two injected current patterns, resistivity can be reconstructed apart from a multiplicative constant. This constant can then be identified using a single voltage measurement. The methods proposed are direct, non-iterative, and valid and feasible for 3D reconstructions. They can also be used to easily obtain slice and field-of-view images from a 3D object. 2D simulations are made to illustrate the performance of the algorithms.
Policy improvement by a model-free Dyna architecture.
Hwang, Kao-Shing; Lo, Chia-Yue
2013-05-01
The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes: one utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards into the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated difference between the immediate reward and the average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system tracking a desired trajectory to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes appropriate actions to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced Dyna-Q learning method in experiments on labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate.
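The "relative value" idea, accumulating deviations of the immediate reward from a running average-reward estimate, can be sketched as follows; the update form and constants are our guess at a generic average-reward scheme, not the authors' exact formulation.

```python
import numpy as np

def relative_value_trace(rewards, alpha=0.05):
    """Track an average-reward estimate rho and a relative value v,
    with v accumulating (r - rho), as in average-reward reinforcement
    learning. Positive v means the recent stream beat the average."""
    rho, v = 0.0, 0.0
    v_hist = []
    for r in rewards:
        v += r - rho              # accumulate deviation from average reward
        rho += alpha * (r - rho)  # exponential running estimate of mean reward
        v_hist.append(v)
    return np.array(v_hist), rho

rewards = np.ones(1000)           # constant reward stream for illustration
v_hist, rho = relative_value_trace(rewards)
```

With a constant reward of 1, rho converges to 1 and v plateaus at the geometric sum of the early deviations, sum of 0.95^k, i.e. 1/alpha = 20; once v stops growing, the stream no longer looks better than average and the planner has no gradient to exploit.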
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, based on local statistical features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and obtains desirable results in segmenting infrared images.
Luan, Xiaoli; Chen, Qiang; Liu, Fei
2014-09-01
This article presents a new scheme to design a full matrix controller for high-dimensional multivariable processes based on equivalent transfer functions (ETFs). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed utilizing existing PI tuning rules. The newly proposed ETF model more accurately represents the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high-dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Detection of small surface defects using DCT based enhancement approach in machine vision systems
NASA Astrophysics Data System (ADS)
He, Fuqiang; Wang, Wen; Chen, Zichen
2005-12-01
Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and some defect features were calculated. Experimental results show the proposed method reaches a small-defect detection rate of 94%.
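A toy version of the DCT-domain high-pass step is sketched below, with a fixed cutting radius standing in for the transition-point estimate described above; the image, radius, and defect placement are all illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_highpass(img, radius):
    """High-pass filter in the 2-D DCT domain: zero the low-frequency
    sector within `radius` of the DC corner, then invert. A crude
    stand-in for the paper's sector cut at the learned radius."""
    C = dctn(img, norm='ortho')
    u, v = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    C[np.hypot(u, v) < radius] = 0.0   # suppress slowly varying background
    return idctn(C, norm='ortho')

# Smooth background (uneven illumination) plus one small bright defect.
x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x)                   # slowly varying background
img[30:33, 40:43] += 1.0               # the "defect"
filtered = dct_highpass(img, radius=8)
```

After filtering, the smooth background is largely removed while the sharp defect, whose energy lies mostly at high frequencies, dominates the residual, which is what makes the subsequent entropy segmentation feasible.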
A New Modified Histogram Matching Normalization for Time Series Microarray Analysis.
Astola, Laura; Molenaar, Jaap
2014-07-01
Microarray data are often utilized in inferring regulatory networks. Quantile normalization (QN) is a popular method to reduce array-to-array variation. We show that in the context of time series measurements QN may not be the best choice for this task, especially if the inference is based on a continuous-time ODE model. We propose an alternative normalization method that is better suited for network inference from time series data.
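For reference, the QN baseline the abstract argues against can be sketched in a few lines (ties are broken by array order here); this is the standard procedure, not the proposed modified histogram matching.

```python
import numpy as np

def quantile_normalize(X):
    """Classic quantile normalization across arrays (columns of X):
    every column is forced onto the mean sorted profile, so all
    columns end up with identical empirical distributions."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    mean_profile = np.sort(X, axis=0).mean(axis=1)
    return mean_profile[ranks]

# Toy expression matrix: 4 genes (rows) x 3 arrays (columns).
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)
```

Forcing identical distributions is exactly what can distort genuine temporal trends in time series data, since a time point with globally elevated expression is flattened onto the common profile.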
NASA Astrophysics Data System (ADS)
Ivanov, M. P.; Tolmachev, Yu. A.
2018-05-01
We consider the most feasible ways to significantly improve the sensitivity of spectroscopic methods for detection and measurement of trace concentrations of greenhouse gas molecules in the atmosphere. The proposed methods are based on combining light fluxes from a number of spectral components of the specified molecule on the same photodetector, taking into account the characteristic features of the transmission spectrum of devices utilizing multipath interference effects.
NASA Astrophysics Data System (ADS)
Başhan, Ali; Uçar, Yusuf; Murat Yağmurlu, N.; Esen, Alaattin
2018-01-01
In the present paper, a Crank-Nicolson-differential quadrature method (CN-DQM) based on utilizing quintic B-splines as a tool has been carried out to obtain the numerical solutions for the nonlinear Schrödinger (NLS) equation. For this purpose, first of all, the Schrödinger equation has been converted into coupled real value differential equations and then they have been discretized using both the forward difference formula and the Crank-Nicolson method. After that, Rubin and Graves linearization techniques have been utilized and the differential quadrature method has been applied to obtain an algebraic equation system. Next, in order to be able to test the efficiency of the newly applied method, the error norms, L2 and L_{∞}, as well as the two lowest invariants, I1 and I2, have been computed. Besides those, the relative changes in those invariants have been presented. Finally, the newly obtained numerical results have been compared with some of those available in the literature for similar parameters. This comparison clearly indicates that the currently utilized method, namely CN-DQM, is an effective and efficient numerical scheme and allows us to propose to solve a wide range of nonlinear equations.
Enhancing an Instructional Design Model for Virtual Reality-Based Learning
ERIC Educational Resources Information Center
Chen, Chwen Jen; Teh, Chee Siong
2013-01-01
In order to effectively utilize the capabilities of virtual reality (VR) in supporting the desired learning outcomes, careful consideration in the design of instruction for VR learning is crucial. In line with this concern, previous work proposed an instructional design model that prescribes instructional methods to guide the design of VR-based…
USDA-ARS?s Scientific Manuscript database
Antibiotic growth promoters that have been historically employed to control pathogens and increase the rate of animal development for human consumption are currently banned in many countries. Probiotics have been proposed as an alternative to control pathogenic bacteria. Traditional culture method...
7 CFR 1940.317 - Methods for ensuring proper implementation of categorical exclusions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... under consideration; and (14) State Water Quality Standard—the proposed action would impair a water... (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM... identified in § 1940.310(b)(8), for farm programs exclusions listed in § 1940.310(d)(2) and (3), and for...
Double-Polymer-Modified Pencil Lead for Stripping Voltammetry of Perchlorate in Drinking Water
ERIC Educational Resources Information Center
Izadyar, Anahita; Kim, Yushin; Ward, Michelle M.; Amemiya, Shigeru
2012-01-01
The inexpensive and disposable electrode based on a double-polymer-modified pencil lead is proposed for upper-division undergraduate instrumental laboratories to enable the highly sensitive detection of perchlorate. Students fabricate and utilize their own electrodes in the 3-4 h laboratory session to learn important concepts and methods of…
ERIC Educational Resources Information Center
Femi, Sunday Akinwumi; Yemisi, Etomi Edwin
2015-01-01
This study investigated effective teaching with the aid of ICT in Nigerian higher education institutions as a proposed solution to graduates' unemployability. The survey method was utilized for this study. Respondents were randomly selected from students and teachers of selected higher institutions in Nigeria. The findings reveal that, even though…
Towards using musculoskeletal models for intelligent control of physically assistive robots.
Carmichael, Marc G; Liu, Dikai
2011-01-01
With the increasing number of robots being developed to physically assist humans in tasks such as rehabilitation and assistive living, more intelligent and personalized control systems are desired. In this paper we propose the use of a musculoskeletal model to estimate the strength of the user, information that can be utilized to improve control schemes in which robots physically assist humans. An optimization model is developed utilizing a musculoskeletal model to estimate human strength in a specified dynamic state. Results of this optimization, as well as methods of using it to observe muscle-based weaknesses in task space, are presented. Lastly, potential methods and problems in incorporating this model into a robot control system are discussed.
Spurious cross-frequency amplitude-amplitude coupling in nonstationary, nonlinear signals
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Lo, Men-Tzung; Hu, Kun
2016-07-01
Recent studies of brain activities show that cross-frequency coupling (CFC) plays an important role in memory and learning. Many measures have been proposed to investigate the CFC phenomenon, including the correlation between the amplitude envelopes of two brain waves at different frequencies - cross-frequency amplitude-amplitude coupling (AAC). In this short communication, we describe how nonstationary, nonlinear oscillatory signals may produce spurious cross-frequency AAC. Utilizing the empirical mode decomposition, we also propose a new method for assessment of AAC that can potentially reduce the effects of nonlinearity and nonstationarity and, thus, help to avoid the detection of artificial AACs. We compare the performances of this new method and the traditional Fourier-based AAC method. We also discuss the strategies to identify potential spurious AACs.
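A minimal sketch of the AAC quantity whose spurious inflation the note analyzes: envelopes from the FFT-based analytic signal, then their Pearson correlation. The synthetic theta/gamma pair sharing one slow envelope is our own toy example, not data from the paper.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as a Hilbert
    transform): zero negative frequencies, double positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def aac(x_slow, x_fast):
    """Amplitude-amplitude coupling: Pearson correlation of the
    amplitude envelopes of two band-limited signals."""
    e1 = np.abs(analytic_signal(x_slow))
    e2 = np.abs(analytic_signal(x_fast))
    return np.corrcoef(e1, e2)[0, 1]

# Theta (6 Hz) and gamma (40 Hz) carriers sharing one slow envelope.
t = np.arange(0.0, 10.0, 1.0 / 500.0)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
theta = env * np.sin(2 * np.pi * 6.0 * t)
gamma = env * np.sin(2 * np.pi * 40.0 * t)
coupling = aac(theta, gamma)
```

Here the coupling is real by construction; the note's point is that the same estimator also reports high values when band-pass filtering smears a single nonstationary, nonlinear oscillation across both bands.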
Dynamic speckle illumination wide-field reflection phase microscopy
Choi, Youngwoon; Hosseini, Poorya; Choi, Wonshik; Dasari, Ramachandra R.; So, Peter T. C.; Yaqoob, Zahid
2014-01-01
We demonstrate a quantitative reflection-phase microscope based on time-varying speckle-field illumination. Due to the short spatial coherence length of the speckle field, the proposed imaging system features superior lateral resolution, 520 nm, as well as high depth selectivity, 1.03 µm. Off-axis interferometric detection enables wide-field, single-shot imaging appropriate for high-speed measurements. In addition, the measured phase sensitivity of this method, i.e., the smallest measurable axial motion, is more than 40 times higher than that available using a transmission system. We demonstrate the utility of our method by successfully distinguishing the motion of the top surface from that of the bottom in red blood cells. The proposed method will be useful for studying membrane dynamics in complex eukaryotic cells. PMID:25361156
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongxing; Fang, Hengrui; Miller, Mitchell D.
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Wide-field absolute transverse blood flow velocity mapping in vessel centerline
NASA Astrophysics Data System (ADS)
Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang
2018-02-01
We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital imaging analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampled pattern on the SLM with the proposed method.
Forest Road Identification and Extraction Through Advanced Log Matching Techniques
NASA Astrophysics Data System (ADS)
Zhang, W.; Hu, B.; Quist, L.
2017-10-01
A novel algorithm for forest road identification and extraction was developed. The algorithm utilized a Laplacian of Gaussian (LoG) filter on high-resolution multispectral imagery and slope calculation on LiDAR data to extract both primary and secondary road segments in the forest area. The proposed method used road shape features to extract the road segments, which were further processed as objects with orientation preserved. The road network was generated after post-processing with tensor voting. The proposed method was tested on Hearst Forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted.
Modeling PSInSAR time series without phase unwrapping
Zhang, L.; Ding, X.; Lu, Z.
2011-01-01
In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need of phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix of the correlated observations. The deformation rates at coherent points are estimated with a least squares model constrained by reference points. The proposed approach is verified with a set of simulated data.
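The weighting step, a pseudoinverse of the variance-covariance matrix in an arc-based least-squares adjustment, can be sketched as below; the three-arc network, rates, and covariance values are invented for illustration.

```python
import numpy as np

def weighted_lsq(A, b, cov):
    """Least squares with the pseudoinverse of the observation
    variance-covariance matrix as weight, accommodating correlated
    (and possibly rank-deficient) multimaster observations."""
    W = np.linalg.pinv(cov)          # weight = pseudoinverse of covariance
    N = A.T @ W @ A                  # normal matrix
    return np.linalg.solve(N, A.T @ W @ b)

# Toy network: 3 arcs observing rate differences of 2 coherent points
# relative to a fixed reference point (rate 0); rows are arcs.
A = np.array([[ 1.0, 0.0],
              [ 0.0, 1.0],
              [-1.0, 1.0]])
true_rates = np.array([2.0, 5.0])    # mm/yr, say
b = A @ true_rates                   # arc observations (noise-free here)
cov = np.array([[1.0, 0.5, 0.0],     # correlated observations
                [0.5, 1.0, 0.0],
                [0.0, 0.0, 2.0]])
rates = weighted_lsq(A, b, cov)
```

With noise-free observations any valid weighting recovers the true rates exactly; the covariance-based weighting matters once the multimaster interferograms share scenes and their errors correlate.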
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo (MCMC) methods, we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was developed to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments for, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure twice allows for the computation of a difference between two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space, somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
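The uniform phase-space mapping idea can be sketched with a toy chi-square surface (a 2-D quadratic standing in for the real observed-minus-computed statistic; step size and threshold are invented): a random-walk Metropolis sampler whose target is flat inside the χ^2 threshold converges to uniform sampling of the acceptable region.

```python
import numpy as np

def chi2(theta):
    """Toy goodness-of-fit surface standing in for the O-C chi-square."""
    return float(theta @ theta)

def mcmc_map(chi2_max=1.0, n_steps=20000, step=0.4, seed=0):
    """Random-walk Metropolis with a flat target on {theta : chi2 <= chi2_max}.
    Proposals outside the region are rejected; inside, all are accepted,
    so the chain converges to uniform sampling of the acceptable region."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)          # start inside the region
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        if chi2(prop) <= chi2_max:
            theta = prop
        samples.append(theta.copy())
    return np.array(samples)

samples = mcmc_map()
```

Every retained sample satisfies the χ^2 cut, and the sample mean sits near the center of the (symmetric) acceptable region, as expected for uniform coverage.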
Introducing the Benson Prize for Discovery Methods of Near Earth Objects by Amateurs
NASA Astrophysics Data System (ADS)
Benson, J. W.
1997-05-01
The Benson Prize for Discovery Methods of Near Earth Objects by Amateurs is an annual competition which awards prizes to the best proposed methods by which amateur astronomers may discover such near earth objects as asteroids and comet cores. The purpose of the Benson Prize is to encourage the discovery of near earth objects by amateur astronomers. The utilization of valuable near earth resources can provide many new jobs and economic activities on Earth, while also creating many new opportunities for opening up the space frontier. The utilization of near earth resources will significantly contribute to the lessening of environmental degradation on the Earth caused by mining and chemical leaching operations required to exploit the low-grade ores now remaining on Earth. In addition, near earth objects pose grave dangers for life on Earth. Discovering and plotting the orbits of all potentially dangerous near earth objects is the first and necessary step in protecting ourselves against the enormous potential damage possible from near earth objects. With the high quality, large size, and low cost of today's consumer telescopes, the rapid development of powerful, high-resolution, and inexpensive CCD cameras, and the proliferation of inexpensive software for today's powerful home computers, the discovery of near earth objects by amateur astronomers is more attainable than ever. The Benson Prize is sponsored by the Space Development Corporation, a space resource exploration and utilization company. In 1997 one prize of $500 will be awarded to the best proposed method for the amateur discovery of NEOs, and in each of the four following years, prizes of $500, $250, and $100 will be awarded. Prizes for the actual discovery of Near Earth Asteroids will be added in later years.
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step in describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are treated as two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and a corresponding evaluation contrast index are used to assess geometry artifacts when optimizing the BB coordinates in the geometry phantom. After particle swarm optimization, the CBCT geometry and the BB coordinates in the geometry phantom are calibrated accurately and can then be used directly for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and real CBCT data. The spatial resolution of images reconstructed from dental CBCT can reach up to 15 line pairs cm^-1. The proposed method also outperforms the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
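A bare-bones particle swarm optimizer in its generic textbook form (the quadratic stand-in objective replaces the paper's evaluation-contrast-index cost, and all parameter values here are assumptions):

```python
import numpy as np

def pso(cost, dim=2, n_particles=30, n_iters=200, seed=0):
    """Minimal particle swarm optimizer (inertia + cognitive + social terms)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # candidate parameter vectors
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()      # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, cost(g)

# Stand-in objective: squared deviation from hypothetical "true" BB coordinates.
bb_true = np.array([1.25, -0.5])
best, best_cost = pso(lambda p: float(np.sum((p - bb_true) ** 2)))
```

In the paper the cost would instead be the geometry-artifact measure evaluated by reconstructing the evaluation phantom; the swarm mechanics stay the same.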
Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm
NASA Astrophysics Data System (ADS)
Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan
2017-12-01
Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than the DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further, more complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
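The DWT-then-DCT cascade can be sketched as follows (one Haar level plus an orthonormal DCT on the approximation band; the paper's quantization and zero-padding stages are omitted, so the round trip here is lossless by construction):

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar DWT -> (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)
    d = (img[0::2] - img[1::2]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Invert one Haar level."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return img

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    D = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2)
    return D

rng = np.random.default_rng(1)
tile = rng.random((8, 8))                      # stand-in for an image tile
ll, lh, hl, hh = haar2(tile)                   # DWT stage
D = dct_matrix(ll.shape[0])
coeffs = D @ ll @ D.T                          # DCT applied to the approximation band
recon = ihaar2(D.T @ coeffs @ D, lh, hl, hh)   # invert both stages
```

Compression would then discard or coarsely quantize small entries of `coeffs` and the detail bands; the sketch only demonstrates that the two transforms compose and invert cleanly.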
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues; we then integrate the depth cues and color volume into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are caused not by biological differences but by experimental bias. There are different sources of bias, including variabilities during sample collection and sample storage, poor experimental design, and noise. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EICs), with parameters treated within a linear mixed-effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
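The drift component can be illustrated with a deliberately simplified correction (a plain linear detrend over run order; the paper's BNMD instead embeds drift within a Bayesian mixed-effects framework, and all values below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
run_order = np.arange(100, dtype=float)            # injection order of LC-MS runs
true_signal = 10.0 + 0.3 * rng.standard_normal(100)
drift = 0.05 * run_order                           # instrument drift over the batch
observed = true_signal + drift

# Simplified drift correction: fit a linear trend over run order and remove it.
slope, intercept = np.polyfit(run_order, observed, 1)
corrected = observed - slope * run_order

slope_after = np.polyfit(run_order, corrected, 1)[0]
```

After correction the residual trend over run order is gone, which is the qualitative effect BNMD achieves (with uncertainty quantification that this sketch lacks).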
A method of 3D object recognition and localization in a cloud of points
NASA Astrophysics Data System (ADS)
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The method proposed in this article is designed for the analysis of data in the form of a cloud of points obtained directly from 3D measurements. It is intended for end-user applications that can be directly integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters which qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information keeps the false positive rate at a reasonably low level.
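An averaging-style curvature proxy can be sketched with a PCA-based "surface variation" feature (an assumption-laden stand-in for the paper's FVs, which qualitatively estimate mean and Gaussian curvatures): it avoids explicit differentiation, is near zero on a planar patch, and grows on a curved one.

```python
import numpy as np

def surface_variation(points):
    """Eigenvalue-based local feature: lambda_min / (lambda_1 + lambda_2 + lambda_3).
    Computed by averaging (covariance of the patch), not differentiation,
    so it tolerates noisy or discontinuous input better than derivatives."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eig = np.sort(np.linalg.eigvalsh(cov))
    return eig[0] / eig.sum()

g = np.linspace(-1, 1, 15)
xx, yy = np.meshgrid(g, g)
plane = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
bowl = np.column_stack([xx.ravel(), yy.ravel(), (xx**2 + yy**2).ravel()])

fv_plane = surface_variation(plane)
fv_bowl = surface_variation(bowl)
```

The contrast between the two values is what lets locally computed FVs discriminate flat regions from curved object surfaces during matching.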
Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images
NASA Astrophysics Data System (ADS)
Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong
2018-01-01
Ships in synthetic aperture radar (SAR) images are severely defocused, and their energy disperses into numerous resolution cells under a long SAR integration time. Therefore, the image intensity of ships is weak and sometimes even overwhelmed by sea clutter in the SAR image, making it hard to detect ships from SAR intensity images. A ship detection method based on the local region power spectrum of the SAR complex image is proposed. Although the energies of the ships are dispersed in SAR intensity images, their spectral energies remain rather concentrated and cause the power spectra of local areas of SAR images to deviate from that of the sea surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectrum distortion of local areas of SAR images. The local region power spectrum of a moving target in a SAR image is analyzed, and the way to obtain the detection threshold through the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized, and the detection results are illustrated. Results show that the proposed method can detect the unfocused ships well, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, comparison with some other algorithms indicates that the proposed method performs better under long SAR integration times. Finally, the applicability of the proposed method and the selection of its parameters are also discussed.
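The detection idea can be sketched on synthetic 1-D complex data (the window length, tone frequency, and max/mean "peakiness" statistic are assumptions, not the paper's pdf-derived threshold): a spectrally concentrated target distorts the local power spectrum even when it is weak in intensity.

```python
import numpy as np

rng = np.random.default_rng(3)
n_windows, win = 8, 128
t = np.arange(win)

# Background: complex Gaussian sea clutter with a flat power spectrum.
signal = (rng.standard_normal(n_windows * win) +
          1j * rng.standard_normal(n_windows * win)) / np.sqrt(2)

# Defocused "ship" in window 5: modest amplitude, but spectrally concentrated.
ship_window = 5
signal[ship_window * win:(ship_window + 1) * win] += 1.5 * np.exp(2j * np.pi * 0.125 * t)

# Detector: power-spectrum peakiness (max / mean) for each local window.
peakiness = np.empty(n_windows)
for w in range(n_windows):
    spec = np.abs(np.fft.fft(signal[w * win:(w + 1) * win])) ** 2
    peakiness[w] = spec.max() / spec.mean()

detected = int(np.argmax(peakiness))
```

The clutter windows have roughly flat spectra, so their peakiness stays small; the target window's spectrum is dominated by one bin and stands out by a wide margin.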
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model combines combinatorial optimization, Monte Carlo simulation and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are indeed the best-performing configurations, illustrating the effectiveness of the proposed optimization method. The results highlight the potential of the proposed seeding distribution method as a useful tool for optimizing the cell expansion process in modern bioreactor system applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
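A crude sketch of why initial seeding positions matter (an invented growth model in which each colony simply covers a fixed radius; the paper's probabilistic growth and optimization models are far richer): spreading the same number of seeds covers more of the scaffold than clustering them.

```python
import numpy as np

def coverage_after_growth(seeds, grid=50, radius=10.0):
    """Fraction of a square scaffold covered once each colony grows to `radius`."""
    ii, jj = np.indices((grid, grid))
    covered = np.zeros((grid, grid), dtype=bool)
    for (si, sj) in seeds:
        covered |= (ii - si) ** 2 + (jj - sj) ** 2 <= radius ** 2
    return covered.mean()

clustered = [(25, 25), (25, 27), (27, 25), (27, 27)]   # seeds dropped together
spread = [(12, 12), (12, 37), (37, 12), (37, 37)]      # spaced-out layout

cov_clustered = coverage_after_growth(clustered)
cov_spread = coverage_after_growth(spread)
```

An optimizer over candidate layouts (as in the paper's second model) would score configurations with exactly this kind of simulated-growth objective and return the spread-out layout.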
El-Bardicy, Mohammad G; Lotfy, Hayam M; El-Sayed, Mohammad A; El-Tarras, Mohammad F
2008-01-01
Ratio subtraction and isosbestic point methods are two innovative spectrophotometric methods used to determine vincamine in the presence of its acid degradation product and a mixture of cinnarizine (CN) and nicergoline (NIC). Linear correlations were obtained in the concentration ranges of 8-40 μg/mL for vincamine (I), 6-22 μg/mL for CN (II), and 6-36 μg/mL for NIC (III), with mean accuracies of 99.72 ± 0.917% for I, 99.91 ± 0.703% for II, and 99.58 ± 0.847 and 99.83 ± 1.039% for III. The ratio subtraction method was utilized for the analysis of laboratory-prepared mixtures containing different ratios of vincamine and its degradation product, and it was valid in the presence of up to 80% degradation product. CN and NIC in synthetic mixtures were analyzed by the two proposed methods, with the total content of the mixture determined at their respective isosbestic points of 270.2 and 235.8 nm and the content of CN determined by the ratio subtraction method. The proposed method was validated and found to be suitable as a stability-indicating assay method for vincamine in pharmaceutical formulations. The standard addition technique was applied to validate the results and to ensure the specificity of the proposed methods.
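The ratio subtraction idea can be sketched numerically (the Gaussian band shapes and the 0.8 mixing coefficient are invented): dividing the mixture spectrum by the degradant spectrum yields a plateau in the region where only the degradant absorbs; subtracting that plateau and multiplying back recovers the intact-drug spectrum.

```python
import numpy as np

wl = np.linspace(250, 450, 401)                  # wavelength grid (nm)
drug = np.exp(-((wl - 300) / 12.0) ** 2)         # hypothetical intact-drug band
degradant = np.exp(-((wl - 360) / 40.0) ** 2)    # hypothetical degradation-product band

mixture = 1.0 * drug + 0.8 * degradant           # lab mixture; 0.8 is "unknown"

# Ratio subtraction: divide by the degradant spectrum, read off the plateau
# where the drug no longer absorbs, subtract it, and multiply back.
ratio = mixture / degradant
plateau = ratio[wl > 420].mean()                 # drug contribution ~ 0 here
drug_recovered = (ratio - plateau) * degradant
```

The plateau value recovers the degradant's mixing coefficient, and the reconstructed band matches the pure drug spectrum, which is why the method stays valid even at high degradation percentages.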
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted on heartbeat classification algorithms over the past several decades. Many algorithms have also been studied to acquire robust performance, as biosignals show a large amount of variation among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using wavelets dedicated to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets. It also significantly reduces the amount of intervention needed from physicians.
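The extreme learning machine classifier stage can be sketched generically (random Gaussian blobs stand in for the wavelet/PCA/LDA feature vectors; layer sizes and seeds are assumptions): the hidden layer is random and fixed, and only the output weights are solved, via a pseudoinverse.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=4):
    """Extreme learning machine: random hidden layer, output weights by pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    Y = np.eye(y.max() + 1)[y]                       # one-hot targets
    beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Two well-separated classes standing in for two beat types' feature vectors.
rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((100, 2)) + [2, 2],
               rng.standard_normal((100, 2)) - [2, 2]])
y = np.repeat([0, 1], 100)
W, b, beta = elm_train(X, y)
accuracy = (elm_predict(X, W, b, beta) == y).mean()
```

Because training reduces to one linear solve, ELMs are fast to retrain per subject, which fits the paper's subject-adapted pipeline.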
Chen, Gong; Qi, Peng; Guo, Zhao; Yu, Haoyong
2017-06-01
In the field of gait rehabilitation robotics, achieving human-robot synchronization is very important. In this paper, a novel human-robot synchronization method using gait event information is proposed. This method includes two steps. First, seven gait events in one gait cycle are detected in real time with a hidden Markov model; second, an adaptive oscillator is utilized to estimate the stride percentage of human gait using any one of the gait events. Synchronous reference trajectories for the robot are then generated with the estimated stride percentage. The method is based on a bioinspired adaptive oscillator, a mathematical tool first proposed to explain the phenomenon of synchronous flashing among fireflies. The proposed synchronization method is implemented in a portable knee-ankle-foot robot and tested on 15 healthy subjects. The method has the advantages of a simple structure, flexible selection of gait events, and fast adaptation. Gait events are the only information needed, and hence the performance of synchronization holds even when an abnormal gait pattern is involved. The results of the experiments reveal that our approach is efficient in achieving human-robot synchronization and feasible for rehabilitation robotics applications.
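A minimal adaptive-oscillator sketch (a phase-locked-loop-style phase oscillator, not the robot's exact formulation; the gains, frequencies, and the sinusoidal stand-in for the gait-event signal are all invented): the oscillator's frequency estimate adapts until its phase synchronizes with the periodic input, after which the phase can be read off as a stride percentage.

```python
import numpy as np

# True gait frequency (rad/s), unknown to the oscillator.
OMEGA = 2 * np.pi * 1.2
K, ETA, DT = 5.0, 4.0, 0.002     # phase gain, adaptation gain, Euler step

phi, omega = 0.0, 2 * np.pi * 0.8        # start with a wrong frequency estimate
for step in range(int(60.0 / DT)):
    t = step * DT
    err = np.sin(OMEGA * t - phi)        # phase-error signal from the input
    phi += DT * (omega + K * err)        # phase dynamics: track the input
    omega += DT * (ETA * err)            # frequency adaptation: learn the input rate

freq_hz = omega / (2 * np.pi)            # converges to the input's 1.2 Hz
```

Once locked, `phi` modulo one cycle gives the stride percentage used to index the robot's reference trajectories.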
Geometric Stitching Method for Double Cameras with Weak Convergence Geometry
NASA Astrophysics Data System (ADS)
Zhou, N.; He, H.; Bao, Y.; Yue, C.; Xing, K.; Cao, S.
2017-05-01
In this paper, a new geometric stitching method is proposed which utilizes digital elevation model (DEM)-aided block adjustment to solve the relative orientation parameters for a dual camera with weak convergence geometry. A rational function model (RFM) with an affine transformation is chosen as the relative orientation model. To deal with the weak geometry, a reference DEM is used in this method as an additional constraint in the block adjustment, which then only calculates the planimetric coordinates of tie points (TPs). The obtained affine transform coefficients are then used to generate a virtual grid and to update the rational polynomial coefficients (RPCs) to complete the geometric stitching. The proposed method was tested on GaoFen-2 (GF-2) dual-camera panchromatic (PAN) images. The test results show that the proposed method can achieve an accuracy of better than 0.5 pixel in planimetry and a seamless visual effect. For regions with small relief, replacing the 1 m grid DEM used as the elevation constraint with a 1 km grid global DEM, 90 m grid SRTM, or 30 m grid ASTER GDEM V2 caused almost no loss of accuracy. The test results prove the effectiveness and feasibility of the stitching method.
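The relative-orientation step ultimately reduces to estimating six affine coefficients from tie points; a least-squares sketch (the affine values, point count, and noise level are invented, and the DEM-constrained block adjustment that fixes the tie-point coordinates is assumed done):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical "true" affine relating the two cameras' image coordinates.
A_true = np.array([[1.001, 0.002, 3.5],
                   [-0.001, 0.999, -2.0]])

# Tie points: planimetric coordinates already fixed by the block adjustment.
pts = rng.uniform(0, 1000, (20, 2))
src = np.column_stack([pts, np.ones(20)])                    # homogeneous coords
dst = src @ A_true.T + 0.01 * rng.standard_normal((20, 2))   # small matching noise

# Least-squares estimate of the six affine coefficients.
X, *_ = np.linalg.lstsq(src, dst, rcond=None)
A_est = X.T
```

With sub-pixel matching noise the recovered coefficients are accurate enough to resample one camera's image onto the other's virtual grid without a visible seam.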