Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
Tracking rural-to-urban migration in China: Lessons from the 2005 inter-census population survey.
Ebenstein, Avraham; Zhao, Yaohui
2015-01-01
We examined migration in China using the 2005 inter-census population survey, in which migrants were registered at both their place of original (hukou) residence and at their destination. We find evidence that the estimated number of internal migrants in China is extremely sensitive to the enumeration method. We estimate that the traditional destination-based survey method fails to account for more than a third of migrants found using comparable origin-based methods. The 'missing' migrants are disproportionately young, male, and holders of rural hukou. We find that origin-based methods are more effective at capturing migrants who travel short distances for short periods, whereas destination-based methods are more effective when entire households have migrated and no remaining family members are located at the hukou location. We conclude with a set of policy recommendations for the design of population surveys in countries with large migrant populations.
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, a high recognition rate can be obtained when a face has enough available training samples. In practical applications, however, only limited training samples are available. To obtain enough training samples, many methods use the original training samples together with corresponding virtual samples to strengthen the ability to represent the test sample. One common approach directly uses the original training samples and their mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. The original training samples and the virtual samples are then combined to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by the various poses, facial expressions and illuminations of the original face images.
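As a concrete illustration of the virtual-sample construction, here is a minimal sketch that averages each training image with its horizontal mirror; the array layout (n_samples, height, width) and the gallery assembly are illustrative assumptions, not the authors' code:

```python
import numpy as np

def virtual_samples(train_imgs):
    """Average each training image with its horizontal mirror to obtain
    a more nearly symmetrical virtual sample."""
    mirrors = train_imgs[:, :, ::-1]        # flip columns: left-right mirror
    return 0.5 * (train_imgs + mirrors)

def extended_gallery(train_imgs):
    """Originals plus their virtual counterparts, used jointly to
    represent a test sample."""
    return np.concatenate([train_imgs, virtual_samples(train_imgs)], axis=0)
```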
Fatigue loading history reconstruction based on the rain-flow technique
NASA Technical Reports Server (NTRS)
Khosrovaneh, A. K.; Dowling, N. E.
1989-01-01
Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two-dimensional rain-flow matrix, and the third on a three-dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, and as a result the reconstructed time history is expected to produce a fatigue life similar to that of the original. The procedures described allow lengthy loading histories to be stored in compact form.
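For readers unfamiliar with rain-flow matrices, the sketch below builds a two-dimensional from-to matrix from a load history using the classic three-point counting rule; the binning scheme and the handling of the unclosed residue are simplifying assumptions, not the authors' exact procedure:

```python
import numpy as np

def turning_points(x):
    """Reduce a load history to its sequence of peaks and valleys."""
    tp = [x[0]]
    for v in x[1:]:
        if v == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (v - tp[-1]) > 0:
            tp[-1] = v                       # still rising/falling: extend
        else:
            tp.append(v)
    return tp

def rainflow_matrix(x, nbins=16):
    """Count closed cycles into a from-to matrix via the three-point rule;
    unclosed residue cycles are ignored in this sketch."""
    lo, hi = min(x), max(x)
    edges = np.linspace(lo, hi, nbins + 1)
    bin_of = lambda v: min(int(np.searchsorted(edges, v, 'right')) - 1, nbins - 1)
    M = np.zeros((nbins, nbins), dtype=int)
    stack = []
    for p in turning_points(x):
        stack.append(p)
        while len(stack) >= 3:
            Y = abs(stack[-3] - stack[-2])   # candidate inner cycle range
            X = abs(stack[-2] - stack[-1])
            if X < Y:
                break
            M[bin_of(stack[-3]), bin_of(stack[-2])] += 1   # count closed cycle
            del stack[-3:-1]                 # remove the counted pair
    return M
```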
Parallelization of an Object-Oriented Unstructured Aeroacoustics Solver
NASA Technical Reports Server (NTRS)
Baggag, Abdelkader; Atkins, Harold; Oezturan, Can; Keyes, David
1999-01-01
A computational aeroacoustics code based on the discontinuous Galerkin method is ported to several parallel platforms using MPI. The discontinuous Galerkin method is a compact high-order method that retains its accuracy and robustness on non-smooth unstructured meshes. In its semi-discrete form, the discontinuous Galerkin method can be combined with explicit time marching methods, making it well suited to time accurate computations. The compact nature of the discontinuous Galerkin method also makes it well suited for distributed memory parallel platforms. The original serial code was written using an object-oriented approach and was previously optimized for cache-based machines. The port to parallel platforms was achieved simply by treating partition boundaries as a type of boundary condition. Code modifications were minimal because boundary conditions were abstractions in the original program. Scalability results are presented for the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. Slightly superlinear speedup is achieved on a fixed-size problem on the Origin, due to cache effects.
Masada, Sayaka
2016-07-01
Herbal medicines have been developed and used in many parts of the world for thousands of years. Although locally grown indigenous plants were originally used for traditional herbal preparations, Western herbal products are now becoming popular in Japan with the increasing interest in health. At the same time, there are growing concerns about the substitution of ingredients and adulteration of herbal products, highlighting the need to authenticate the origin of the plants used in herbal products. This review describes studies on Cimicifuga and Vitex products developed in Europe and Japan, focusing on establishing analytical methods to evaluate the origins of source plants and finished products. These methods include a polymerase chain reaction-restriction fragment length polymorphism method and a multiplex amplification refractory mutation system method. A genome-based authentication method and liquid chromatography-mass spectrometry-based authentication for black cohosh products, and the identification of two characteristic diterpenes of agnus castus fruit and a shrub chaste tree fruit-specific triterpene derivative, are also described.
Compressive sensing method for recognizing cat-eye effect targets.
Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo
2013-10-01
This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are used to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification, based on processing the original images, into processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
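Because the projections are linear, background removal can be carried out directly on the compressed measurement vectors. A toy sketch under assumed dimensions (the sensing matrix, frame sizes and synthetic frames are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 512, 64 * 64                               # measurements << pixels
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

def measure(frame):
    return Phi @ frame.ravel()

# Synthetic active (laser on) and passive (laser off) frames of one scene;
# the cat-eye return appears only in the active frame.
passive = rng.random((64, 64))
active = passive.copy()
active[30:34, 30:34] += 1.0

# Subtracting measurement vectors equals measuring the difference image,
# so the static background cancels without reconstructing either frame.
y_target = measure(active) - measure(passive)
```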
Liachko, Ivan; Youngblood, Rachel A.; Keich, Uri; Dunham, Maitreya J.
2013-01-01
DNA replication origins are necessary for the duplication of genomes. In addition, plasmid-based expression systems require DNA replication origins to maintain plasmids efficiently. The yeast autonomously replicating sequence (ARS) assay has been a valuable tool in dissecting replication origin structure and function. However, the dearth of information on origins in diverse yeasts limits the availability of efficient replication origin modules to only a handful of species and restricts our understanding of origin function and evolution. To enable rapid study of origins, we have developed a sequencing-based suite of methods for comprehensively mapping and characterizing ARSs within a yeast genome. Our approach finely maps genomic inserts capable of supporting plasmid replication and uses massively parallel deep mutational scanning to define molecular determinants of ARS function with single-nucleotide resolution. In addition to providing unprecedented detail into origin structure, our data have allowed us to design short, synthetic DNA sequences that retain maximal ARS function. These methods can be readily applied to understand and modulate ARS function in diverse systems. PMID:23241746
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying the DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not provide the most significant noise reduction performance. Conversely, an image fusion method applying SRAD (speckle-reducing anisotropic diffusion)-original conditions preserved the key information in the original image, and the speckle noise was removed. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance with the ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.
An efficient study design to test parent-of-origin effects in family trios.
Yu, Xiaobo; Chen, Gao; Feng, Rui
2017-11-01
Increasing evidence has shown that genes may cause prenatal, neonatal, and pediatric diseases depending on their parental origins. Statistical models that incorporate parent-of-origin effects (POEs) can improve the power of detecting disease-associated genes and help explain the missing heritability of diseases. In many studies, children have been sequenced for genome-wide association testing, but it may be unaffordable to sequence their parents and evaluate POEs. Motivated by this reality, we proposed a budget-friendly study design of sequencing children and only genotyping their parents through a single nucleotide polymorphism array. We developed a powerful likelihood-based method, which takes into account both sequence reads and linkage disequilibrium to infer the parental origins of children's alleles and estimate their POEs on the outcome. We evaluated the performance of our proposed method and compared it with an existing method using only genotypes, through extensive simulations. Our method showed higher power than the genotype-based method. When either the mean read depth or the paired-end length was reasonably large, our method achieved ideal power. When single parents' genotypes were unavailable or parental genotypes at the testing locus were not typed, both methods lost power compared with when complete data were available, but the power loss from our method was smaller than that from the genotype-based method. We also extended our method to accommodate mixed genotype, low-, and high-coverage sequence data from children and their parents. In the presence of sequencing errors, low-coverage parental sequence data may lead to lower power than parental genotype data.
USDA-ARS's Scientific Manuscript database
The beard testing method for measuring cotton fiber length is based on the fibrogram theory. However, in the instrumental implementations, the engineering complexity alters the original fiber length distribution observed by the instrument. This causes challenges in obtaining the entire original le...
Mapping yeast origins of replication via single-stranded DNA detection.
Feng, Wenyi; Raghuraman, M K; Brewer, Bonita J
2007-02-01
Studies in the yeast Saccharomyces cerevisiae have provided a framework for understanding how eukaryotic cells replicate their chromosomal DNA to ensure faithful transmission of genetic information to their daughter cells. In particular, S. cerevisiae is the first eukaryote to have its origins of replication mapped on a genomic scale, by three independent groups using three different microarray-based approaches. Here we describe a new technique of origin mapping via detection of single-stranded DNA in yeast. This method not only identified the majority of previously discovered origins, but also detected new ones. We have also shown that this technique can identify origins in Schizosaccharomyces pombe, illustrating the utility of this method for origin mapping in other eukaryotes.
Simple and fast multiplex PCR method for detection of species origin in meat products.
Izadpanah, Mehrnaz; Mohebali, Nazanin; Elyasi Gorji, Zahra; Farzaneh, Parvaneh; Vakhshiteh, Faezeh; Shahzadeh Fazeli, Seyed Abolhassan
2018-02-01
Identification of animal species is one of the major concerns in food regulatory control and quality assurance systems. Different approaches have been used for species identification in feedstuffs of animal origin. This study aimed to develop a multiplex PCR approach to detect the origin of meat and meat products. Specific primers were designed based on the conserved region of the mitochondrial Cytochrome C Oxidase subunit I (COX1) gene. This method could successfully distinguish the origin of pig, camel, sheep, donkey, goat, cow, and chicken in one single reaction. Since the PCR products derived from each species have unique molecular weights, the amplified products can be separated by electrophoresis and analyzed based on their size. Due to the synchronized amplification of segments within a single PCR reaction, multiplex PCR is considered to be a simple, fast, and inexpensive technique that can be applied for the identification of meat products in the food industry. This technique is now considered a practical method to identify species origin and could be further applied to the identification of animal feedstuffs.
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
Fecal bacteria, including those originating from concentrated animal feeding operations, are a leading contributor to water quality impairments in agricultural areas. Rapid and reliable methods are needed that can accurately characterize fecal pollution in agricultural settings....
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins, 2) population centroids as origins, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
To extract the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat that utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, 40% of the sampling time can be saved by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed and original respiratory waveforms reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
A novel alignment-free method for detection of lateral genetic transfer based on TF-IDF.
Cong, Yingnan; Chan, Yao-Ban; Ragan, Mark A
2016-07-25
Lateral genetic transfer (LGT) plays an important role in the evolution of microbes. Existing computational methods for detecting genomic regions of putative lateral origin scale poorly to large data. Here, we propose a novel method based on TF-IDF (Term Frequency-Inverse Document Frequency) statistics to detect not only regions of lateral origin, but also their origin and direction of transfer, in sets of hierarchically structured nucleotide or protein sequences. This approach is based on the frequency distributions of k-mers in the sequences. If a set of contiguous k-mers appears sufficiently more frequently in another phyletic group than in its own, we infer that they have been transferred from the first group to the second. We performed rigorous tests of TF-IDF using simulated and empirical datasets. With the simulated data, we tested our method under different parameter settings for sequence length, substitution rate between and within groups and post-LGT, deletion rate, length of transferred region and k size, and found that we can detect LGT events with high precision and recall. Our method performs better than an established method, ALFY, which has high recall but low precision. Our method is efficient, with runtime increasing approximately linearly with sequence length.
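A rough sketch of the k-mer TF-IDF idea follows: k-mers of a query sequence are scored by their frequency in another phyletic group, down-weighted when the k-mer is widespread overall. The exact statistic, thresholding, and aggregation of contiguous k-mers into candidate regions in the paper differ:

```python
import numpy as np
from collections import Counter

def kmer_counts(seq, k=8):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def tf_idf_scores(query, own_group, other_group, k=8):
    """Score each k-mer of `query` by its frequency in the other phyletic
    group (TF), down-weighted when the k-mer is common to most sequences
    (IDF). High-scoring runs of contiguous k-mers suggest transfer from
    the other group into the query's lineage."""
    docs = own_group + other_group
    df = Counter()
    for s in docs:
        df.update(set(kmer_counts(s, k)))       # document frequency per k-mer
    other_tf = Counter()
    for s in other_group:
        other_tf.update(kmer_counts(s, k))      # term frequency in other group
    return {kmer: other_tf[kmer] * np.log(len(docs) / (1 + df[kmer]))
            for kmer in kmer_counts(query, k)}
```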
Evaluating user reputation in online rating systems via an iterative group-based ranking method
NASA Astrophysics Data System (ADS)
Gao, Jian; Zhou, Tao
2017-05-01
Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
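A compact sketch of the iterative reputation-allocation loop, assuming discrete rating levels; the precise weighting and normalization used by the authors are not reproduced here:

```python
import numpy as np

def iterative_group_ranking(R, max_iter=100, tol=1e-8):
    """R: (n_users x n_objects) matrix of discrete ratings. Users giving
    the same rating to the same object form a group; a user's reputation
    accumulates the reputation-weighted sizes of the groups he or she
    joins, so high-reputation users dominate their groups. Iterate until
    the reputation vector stabilizes."""
    n_users, n_objects = R.shape
    rep = np.full(n_users, 1.0 / n_users)
    for _ in range(max_iter):
        new = np.zeros(n_users)
        for j in range(n_objects):
            for level in np.unique(R[:, j]):
                members = R[:, j] == level
                new[members] += rep[members].sum()   # weighted group size
        new /= new.sum()                             # renormalize reputations
        if np.abs(new - rep).max() < tol:
            break
        rep = new
    return rep
```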
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
NASA Astrophysics Data System (ADS)
Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang
2018-06-01
In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in the application of adjoint-based adaptation for simulating compressible flows.
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. By using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase in accuracy over the original dataset of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
Final Report for X-ray Diffraction Sample Preparation Method Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ely, T. M.; Meznarich, H. K.; Valero, T.
WRPS-1500790, “X-ray Diffraction Saltcake Sample Preparation Method Development Plan/Procedure,” was originally prepared with the intent of improving the specimen preparation methodology used to generate saltcake specimens suitable for XRD-based solid phase characterization. At the time this test plan was originally developed, packed powder in cavity supports with collodion binder was the established XRD specimen preparation method. An alternative specimen preparation method that is less vulnerable, if not completely invulnerable, to preferred-orientation effects was desired as a replacement for this method.
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
To reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by the alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
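The alternating scheme can be sketched as follows for a batch of compressed patches, alternating an ISTA sparse-coding step with a least-squares-style dictionary update. Initialization, the constraints that make the blind problem well posed, and the stopping rule are simplifying assumptions here, not the paper's exact algorithm:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_cs(Y, Phi, n_atoms=64, n_iter=50, lam=0.05):
    """Alternating minimization for min_{D,S} ||Y - Phi D S||_F^2 + lam||S||_1.
    Y: (m x p) compressed measurements of p patches; Phi: (m x n) sensing matrix."""
    m, n = Phi.shape
    p = Y.shape[1]
    rng = np.random.default_rng(0)
    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    S = np.zeros((n_atoms, p))
    for _ in range(n_iter):
        A = Phi @ D
        L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant
        if L == 0:
            break
        S = soft(S - (A.T @ (A @ S - Y)) / L, lam / L)   # ISTA step on S
        D = np.linalg.pinv(Phi) @ Y @ np.linalg.pinv(S)  # LS-type update of D
        norms = np.linalg.norm(D, axis=0)
        norms[norms == 0] = 1.0
        D /= norms                                       # keep atoms normalized
    return D @ S                                         # reconstructed patches
```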
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
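For reference, below is a classical amplitude-adjusted Fourier transform (AAFT) surrogate generator, the kind of baseline the proposed technique improves on; the trend-preserving and end-mismatch fixes described in the abstract are not reproduced here:

```python
import numpy as np

def aaft_surrogate(x, seed=None):
    """Amplitude-adjusted Fourier transform surrogate: preserves the
    amplitude distribution exactly and the linear correlations approximately."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # 1. Gaussianize: rank-remap x onto sorted Gaussian noise
    g = np.sort(rng.standard_normal(n))
    ranks = np.argsort(np.argsort(x))
    y = g[ranks]
    # 2. Phase-randomize the Gaussianized series (DC phase kept at zero;
    #    the Nyquist component is left complex in this simplified sketch)
    Y = np.fft.rfft(y)
    phases = rng.uniform(0, 2 * np.pi, len(Y))
    phases[0] = 0.0
    ys = np.fft.irfft(np.abs(Y) * np.exp(1j * phases), n)
    # 3. Map back onto the original amplitude distribution
    return np.sort(x)[np.argsort(np.argsort(ys))]
```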
Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data.
Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing
2017-05-15
Hyperspectral remote sensing technology can acquire nearly continuous spectrum information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the first original band with a large amount of information is determined based on mutual information theory. Subsequently, a second original band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted to compare the proposed method and traditional sea ice detection methods. Our proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results show that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection.
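The band-selection cascade can be sketched as follows, with band variance standing in for the mutual-information criterion of the first step; the pixels-by-bands layout and the final classifier stage are illustrative assumptions:

```python
import numpy as np

def select_bands(X, n_select=10):
    """X: (pixels x bands). Greedy selection: start from the most
    informative band (variance proxy), add the band least correlated
    with it, then repeatedly add the band with the largest
    linear-prediction residual from the already-selected bands."""
    Xc = X - X.mean(axis=0)
    first = int(np.argmax(Xc.var(axis=0)))
    corr = np.abs(np.corrcoef(Xc, rowvar=False)[first])
    second = int(np.argmin(corr))                 # least similar to the first
    sel = [first, second]
    while len(sel) < n_select:
        B = Xc[:, sel]
        coef, *_ = np.linalg.lstsq(B, Xc, rcond=None)
        resid = ((Xc - B @ coef) ** 2).sum(axis=0)  # prediction error per band
        resid[sel] = -np.inf                        # never reselect a band
        sel.append(int(np.argmax(resid)))
    return sel  # feed X[:, sel] to an SVM (or other) classifier
```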
NASA Astrophysics Data System (ADS)
Wang, Hai-Yan; Song, Chao; Sha, Min; Liu, Jun; Li, Li-Ping; Zhang, Zheng-Yong
2018-05-01
Raman spectra and ultraviolet-visible absorption spectra of four different geographic origins of Radix Astragali were collected. These data were analyzed using kernel principal component analysis combined with sparse representation classification. The results showed that the recognition rate reached 70.44% using Raman spectra for data input and 90.34% using ultraviolet-visible absorption spectra for data input. A new fusion method based on Raman combined with ultraviolet-visible data was investigated and the recognition rate was increased to 96.43%. The experimental results suggested that the proposed data fusion method effectively improved the utilization rate of the original data.
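A feature-level fusion sketch consistent with the description: standardize each modality, concatenate, embed with kernel PCA, then hand the scores to the classifier. The kernel choice and component count are assumptions, and the sparse representation classifier is left abstract:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fuse_and_embed(raman, uv, n_components=20):
    """raman, uv: (n_samples x n_features) spectra of the same samples.
    Standardize per modality, concatenate, and embed with kernel PCA
    before the downstream (sparse-representation) classifier."""
    def z(X):
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    fused = np.hstack([z(raman), z(uv)])
    return KernelPCA(n_components=n_components, kernel='rbf').fit_transform(fused)
```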
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA method.
Yoon, So-Ra; Kim, Sung Hyun; Lee, Hae-Won; Ha, Ji-Hyoung
2017-01-01
The geographical origin of kimchi is of interest to consumers and producers because the prices of commercial kimchi products can vary significantly according to the geographical origin. Hence, social issues related to the geographical origin of kimchi in Korea have emerged as a major problem. In this study, the geographical origin of kimchi was determined by comparing the mass fingerprints obtained for Korean and Chinese kimchi samples by MALDI-TOF MS with multivariate analysis. The results obtained herein provide an accurate, powerful tool to clearly discriminate kimchi samples based on their geographical origin within a short time and to ensure food authenticity, which is of significance in the kimchi industry. Furthermore, our MALDI-TOF MS method could be applied to determining the geographical origin of other fermented-salted vegetables at a reduced cost in shorter times.
ERIC Educational Resources Information Center
Jafari, Alireza
2011-01-01
This study explored the relationship between religious orientation (internal-external) and the ways of coping stress (problem-based and emotion-based) in the students of IAU (Islamic Azad University), Abhar Branch. Religion with internal origin is comprehensive and has well-organized principles. However, religion with external origin is a device…
Speciation of animal fat: Needs and challenges.
Hsieh, Yun-Hwa Peggy; Ofori, Jack Appiah
2017-05-24
The use of pork fat is a concern for Muslims and Jews, who for religious reasons avoid consuming anything that is pig-derived. The use of bovine materials, including beef fat, is prohibited in Hinduism and may also pose a risk of carrying the infectious agent for bovine spongiform encephalopathy. Vegetable oils are sometimes adulterated with animal fat, or pork fat with beef fat, for economic gain. The development of methods to determine the species origin of fat has therefore become a priority due to the complex and global nature of the food trade, which creates opportunities for the fraudulent use of these animal fats as food ingredients. However, determining the species origin of fats in processed foods or composite blends is an arduous task because the adulterant has a composition very similar to that of the original fat or oil. This review examines some of the methods that have been developed for fat speciation, including both fat-based and DNA-based methods, their shortcomings, and the need for additional alternatives. Protein-based methods, specifically immunoassays targeting residual proteins in adipose tissue, which are being explored by researchers as a new tool for fat speciation, will also be discussed.
Reliable femoral frame construction based on MRI dedicated to muscles position follow-up.
Dubois, G; Bonneau, D; Lafage, V; Rouch, P; Skalli, W
2015-10-01
In vivo follow-up of muscle shape variation represents a challenge when evaluating muscle development due to disease or treatment. Recent developments in muscle reconstruction techniques indicate MRI as a clinical tool for the follow-up of the thigh muscles. Comparing 3D muscle shapes from two different sequences is not easy because there is no common frame. This study proposes an innovative method for the reconstruction of a reliable femoral frame based on the femoral head and both condyle centers. To make the definition of the condylar spheres more robust, an original method was developed that combines the estimation of the diameters of both condyles from the lateral antero-posterior distance with the estimation of the sphere centers from an optimization process. The influence of the spacing between MR slices and of the origin position was studied. For all axes, the proposed method presented an angular error lower than 1° with a spacing between slices of 10 mm, and the optimal position of the origin was identified at 56% of the distance between the femoral head center and the barycenter of both condyles. The high reliability of this method provides a robust frame for clinical follow-up based on MRI.
Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.
2014-01-01
Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected the original 'real-world' data. The Markov Chain method (MCM) used an algorithm based on a transitional probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster, generated shorter messages, and exhibited less variation in message length. The MCM required more computational power and generated longer messages with more length variability. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with the original messages. Both methods yielded similar rates of valid messages. PMID:25954458
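The MCM idea can be illustrated at the segment level: estimate a transition table from observed sequences of HL7 segment IDs, then sample new sequences from it. The toy corpus and start/end sentinels below are illustrative, not the actual INPC data:

```python
import random
from collections import defaultdict

def build_transition_table(sequences):
    """Transitional-probability table over HL7 segment IDs, stored as
    observed-successor lists so sampling is proportional to counts."""
    table = defaultdict(list)
    for seq in sequences:
        for cur, nxt in zip(['^START'] + seq, seq + ['^END']):
            table[cur].append(nxt)
    return table

def generate(table, seed=None):
    """Random walk from ^START to ^END yields one synthetic segment sequence."""
    rng = random.Random(seed)
    seq, cur = [], '^START'
    while True:
        cur = rng.choice(table[cur])
        if cur == '^END':
            return seq
        seq.append(cur)

# Toy corpus of segment-ID sequences standing in for real messages.
corpus = [['MSH', 'PID', 'PV1', 'OBX'], ['MSH', 'PID', 'OBX', 'OBX']]
print(generate(build_transition_table(corpus), seed=42))
```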
Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.
Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin
2014-09-01
l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses a two-dimensional (2D) wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix and then apply a novel 3D Walsh transform-based sparsity basis to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency.
Structural analysis of paintings based on brush strokes
NASA Astrophysics Data System (ADS)
Sablatnig, Robert; Kammerer, Paul; Zolda, Ernestine
1998-05-01
The origin of works of art often cannot be attributed to a certain artist. Likewise, it is difficult to say whether paintings or drawings are originals or forgeries. In various fields of art, new technical methods are used to examine the age, the state of preservation and the origin of the materials used. For the examination of paintings, radiological methods like X-ray and infra-red diagnosis, digital radiography, computed tomography, etc., and color analyses are employed to authenticate art. But none of these methods relates certain characteristics in an artwork to a specific artist -- the artist's personal style. In order to study this personal style of a painter, experts in art history and image processing try to examine the 'structural signature' based on brush strokes within paintings, in particular in portrait miniatures. A computer-aided classification and recognition system for portrait miniatures is developed, which enables a semi-automatic classification and forgery detection based on content, color, and brush strokes. A hierarchically structured classification scheme is introduced which separates the classification into three different levels of information: color, shape of region, and structure of brush strokes.
SU-E-J-48: Imaging Origin-Radiation Isocenter Coincidence for Linac-Based SRS with Novalis Tx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geraghty, C; Workie, D; Hasson, B
Purpose: To implement and evaluate an image-based Winston-Lutz (WL) test to measure the displacement between the ExacTrac imaging origin and the radiation isocenter on a Novalis Tx system using RIT V6.2 software analysis tools. Displacement between imaging and radiation isocenters was tracked over time. The method was applied for cone-based and MLC-based WL tests. Methods: The Brainlab Winston-Lutz phantom was aligned to room lasers. The ExacTrac imaging system was then used to detect the Winston-Lutz phantom and obtain the displacement between the center of the phantom and the imaging origin. EPID images of the phantom were obtained at various gantry and couch angles and analyzed with RIT to calculate the phantom center to radiation isocenter displacement. The RIT and ExacTrac displacements were combined to calculate the displacement between imaging origin and radiation isocenter. Results were tracked over time. Results: Mean displacements between the ExacTrac origin and radiation isocenter were: VRT: −0.1mm ± 0.3mm, LNG: 0.5mm ± 0.2mm, LAT: 0.2mm ± 0.2mm (vector magnitude of 0.7 ± 0.2mm). Radiation isocenter was characterized by the mean of the standard deviations of the WL phantom displacements: σVRT: 0.2mm, σLNG: 0.4mm, σLAT: 0.6mm. The linac couch base was serviced to reduce couch walkout, which reduced σLAT to 0.2mm. These measurements established a new baseline of radiation isocenter-imaging origin coincidence. Conclusion: The image-based WL test has ensured submillimeter localization accuracy using the ExacTrac imaging system. Standard deviations of ExacTrac-radiation isocenter displacements indicate that average agreement within 0.3mm is possible in each axis. This WL test is a departure from the traditional WL test in that imaging origin/radiation isocenter agreement is the end goal, not laser/radiation isocenter agreement.
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to automatically identify the foot of the continental slope (FOS) based on an integrated analysis of topographic profiles. Based on the extremum points of the second derivative and the Douglas-Peucker algorithm, the method simplifies the topographic profiles and then calculates the second derivative of the original profiles and of the D-P simplified profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification criteria are proposed to determine the FOS points, including the gradient, water depth and second derivative values of data points, as well as the concavity and convexity, continuity and segmentation of the topographic profiles. This method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives and D-P profiles, and on this basis it can analyze the essential properties of every data point in the profile. Furthermore, concave points of the curve are removed and six FOS judgment criteria are implemented.
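The Douglas-Peucker simplification at the heart of the first step can be sketched as follows for a depth profile given as (distance, depth) points; the tolerance eps and the recursion are the standard algorithm, while the paper combines this with second-derivative extrema:

```python
import numpy as np

def douglas_peucker(pts, eps):
    """Keep the endpoints; recursively keep any interior point whose
    perpendicular distance to the chord exceeds eps."""
    pts = np.asarray(pts, dtype=float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    rel = pts[1:-1] - a
    # perpendicular distance of interior points to the chord a-b
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / np.linalg.norm(ab)
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= eps:
        return np.vstack([a, b])
    left = douglas_peucker(pts[:i + 1], eps)
    right = douglas_peucker(pts[i:], eps)
    return np.vstack([left[:-1], right])     # drop the duplicated split point
```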
ERIC Educational Resources Information Center
Han, Gang; Newell, Jay
2014-01-01
This study explores the adoption of the team-based learning (TBL) method in knowledge-based and theory-oriented journalism and mass communication (J&MC) courses. It first reviews the origin and concept of TBL, the relevant theories, and then introduces the TBL method and implementation, including procedures and assessments, employed in an…
Hercegová, Andrea; Dömötörová, Milena; Kruzlicová, Dása; Matisová, Eva
2006-05-01
Four sample preparation techniques were compared for the ultratrace analysis of pesticide residues in baby food: (a) modified Schenck's method based on ACN extraction with SPE cleaning; (b) quick, easy, cheap, effective, rugged, and safe (QuEChERS) method based on ACN extraction and dispersive SPE; (c) modified QuEChERS method which utilizes column-based SPE instead of dispersive SPE; and (d) matrix solid phase dispersion (MSPD). The methods were combined with fast gas chromatographic-mass spectrometric analysis. The effectiveness of clean-up of the final extract was determined by comparison of the chromatograms obtained. Time consumption, laboriousness, demands on glassware and working place, and consumption of chemicals, especially solvents, increase in the following order QuEChERS < modified QuEChERS < MSPD < modified Schenck's method. All methods offer satisfactory analytical characteristics at the concentration levels of 5, 10, and 100 microg/kg in terms of recoveries and repeatability. Recoveries obtained for the modified QuEChERS method were lower than for the original QuEChERS. In general the best LOQs were obtained for the modified Schenck's method. Modified QuEChERS method provides 21-72% better LOQs than the original method.
On some methods for assessing earthquake predictions
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.; Peresan, A.
2017-09-01
A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.
Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation
NASA Technical Reports Server (NTRS)
Hong, Seunghun
2002-01-01
As proposed in our original proposal, we developed a new innovative method to assemble millions of single wall carbon nanotube (SWCNT)-based circuit components as fast as conventional microfabrication processes. This method is based on a surface template assembly strategy. The new method solves one of the major bottlenecks in carbon nanotube-based electrical applications and, potentially, may allow us to mass-produce a large number of SWCNT-based integrated devices of critical interest to NASA.
PDEs on moving surfaces via the closest point method and a modified grid based particle method
NASA Astrophysics Data System (ADS)
Petras, A.; Ruuth, S. J.
2016-05-01
Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
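The core of DFI is per-pixel interpolation between adjacent tracked frames. A minimal sketch, assuming frames as 2-D arrays and probe poses reduced to translation vectors (real probe poses also need orientation interpolation, e.g. slerp):

```python
import numpy as np

def interpolate_frames(f0, f1, p0, p1, n_mid):
    """Insert n_mid intermediate frames (and probe positions) between two
    adjacent tracked B-mode frames by per-pixel linear interpolation."""
    frames, poses = [], []
    for k in range(1, n_mid + 1):
        t = k / (n_mid + 1)
        frames.append((1 - t) * f0 + t * f1)   # blended intermediate frame
        poses.append((1 - t) * p0 + t * p1)    # linearly interpolated position
    return frames, poses
```

The original and interpolated frames together define the whole extent of the volume, which is why no separate hole-filling pass is needed.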
EVALUATION OF HOST SPECIFIC PCR-BASED METHODS FOR THE IDENTIFICATION OF FECAL POLLUTION
Microbial Source Tracking (MST) is an approach to determine the origin of fecal pollution impacting a body of water. MST is based on the assumption that, given the appropriate method and indicator, the source of microbial pollution can be identified. One of the key elements of...
Medical image security using modified chaos-based cryptography approach
NASA Astrophysics Data System (ADS)
Talib Gatta, Methaq; Al-latief, Shahad Thamear Abd
2018-05-01
The progressive development of telecommunication and networking technologies has led to the increased popularity of telemedicine, which involves the storage and transfer of medical images and related information, so security concerns have emerged. This paper presents a method to secure medical images, since they play a major role in healthcare organizations. The main idea of this work is based on a chaotic sequence, providing an efficient encryption method that allows the original image to be reconstructed from the encrypted image with high quality and minimal distortion of its content, without affecting human treatment and diagnosis. Experimental results prove the efficiency of the proposed method using statistical measures and the strong correlation between the original and decrypted images.
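The abstract does not specify the chaotic map, so the sketch below uses the logistic map, a common choice in chaos-based image ciphers; XOR-based mixing makes decryption the same operation as encryption, which is consistent with exact recovery of the original image:

```python
import numpy as np

def chaotic_keystream(n, x0=0.7, r=3.99):
    """Generate n pseudo-random bytes from the logistic map
    x <- r*x*(1-x); the map and its parameters are illustrative
    assumptions, not the paper's exact construction."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image):
    """XOR each pixel (uint8) with the chaotic keystream; applying the
    same function to the ciphertext recovers the original image."""
    flat = image.ravel()
    ks = chaotic_keystream(flat.size)
    return (flat ^ ks).reshape(image.shape)
```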
The Coordinate Transformation Method of High Resolution DEM Data
NASA Astrophysics Data System (ADS)
Yan, Chaode; Guo, Wang; Li, Aimin
2018-04-01
Coordinate transformation methods for DEM data can be divided into two categories. One reconstructs the DEM from the original vector elevation data; the other transforms DEM data blocks using transformation parameters. However, the former does not work in the absence of the original vector data, and the latter may cause errors at the joints between adjoining blocks of high resolution DEM data. In view of this problem, a method for high resolution DEM data coordinate transformation is proposed. The method converts the DEM data into discrete vector elevation points and then adjusts the positions of the points by bi-linear interpolation. Finally, a TIN is generated from the transformed points, and the new DEM data in the target coordinate system are reconstructed based on the TIN. An algorithm that finds blocks and transforms them automatically is given in this paper. The method is tested on different terrains and proved to be feasible and valid.
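A minimal sketch of the pipeline (point-wise transformation followed by TIN-like linear interpolation back onto a regular grid) might look as follows; the `transform` callable (e.g. a seven-parameter datum transformation) is supplied by the caller, and all names are illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

def transform_dem(dem, x0, y0, h, transform):
    """Treat each DEM cell as a discrete elevation point, transform its
    (x, y) coordinates, then rebuild a regular grid by linear
    interpolation over a Delaunay triangulation, which mimics
    reconstruction from a TIN. Target grid reuses the source extent."""
    ny, nx = dem.shape
    xs = x0 + h * np.arange(nx)
    ys = y0 + h * np.arange(ny)
    X, Y = np.meshgrid(xs, ys)
    Xt, Yt = transform(X.ravel(), Y.ravel())       # per-point transform
    Xg, Yg = np.meshgrid(xs, ys)                   # target grid
    return griddata((Xt, Yt), dem.ravel(), (Xg, Yg), method='linear')
```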
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which is one we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous method with one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High chroma gamuts are used for adding appropriate colors to the original image, and low chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area in the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than that of the conventional method.
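For reference, the underlying gray-world estimate that both the original and improved methods start from can be written in a few lines (a sketch; the paper's contribution lies in choosing and augmenting the colors that enter this average):

```python
import numpy as np

def gray_world_illuminant(image):
    """Classic gray-world estimate: the average of all pixels is
    assumed achromatic, so the per-channel means give the illuminant
    color up to scale. `image` is an (H, W, 3) RGB array."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means / np.linalg.norm(means)   # normalized illuminant estimate
```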
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. Using the sparse dictionary, this method addresses the problems of deformation model initialization and the accuracy of a priori methods. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
Ai, Zhipin; Wang, Qinxue; Yang, Yonghui; Manevski, Kiril; Zhao, Xin; Eer, Deni
2017-12-19
Evaporation from land surfaces is a critical component of the Earth's water cycle and of water management strategies. The complementary method originally proposed by Bouchet, which describes a linear relation between actual evaporation (E), potential evaporation (E_po) and apparent potential evaporation (E_pa) based on routinely measured weather data, is one of the various methods for evaporation calculation. This study evaluated the reformulated version of the original method, as proposed by Brutsaert, for forest land cover in Japan. The new complementary method is nonlinear and based on boundary conditions with strictly physical considerations. The only unknown parameter (α_e) was determined for the first time for various forest covers located from north to south across Japan. The values of α_e ranged from 0.94 to 1.10, with a mean value of 1.01. Furthermore, the evaporation calculated with the new method showed a good fit with the eddy-covariance measured values, with a determination coefficient of 0.78 and a mean bias of 4%. The evaluation results revealed that the new nonlinear complementary relation performs better than the original linear relation in describing the relationship between E/E_pa and E_po/E_pa, and also in depicting the asymmetric variation between E_pa/E_po and E/E_po.
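For orientation, the nonlinear relation evaluated here is commonly written in scaled variables as follows (a sketch of Brutsaert's formulation as we understand it; the derivation, and the way the Priestley-Taylor-type coefficient α_e enters through E_po, are given in the original papers):

```latex
% Scaled variables: x = E_{po}/E_{pa}, y = E/E_{pa}.
% Brutsaert's nonlinear complementary relation (sketch):
\[
  y = (2 - x)\, x^{2},
\]
% which satisfies the physical boundary conditions
% y(0) = 0,  y'(0) = 0,  y(1) = 1,  y'(1) = 1,
% whereas Bouchet's original linear relation corresponds to y = 2x - 1.
```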
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on the one hand, exhibits exponential convergence rates when the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm introduces substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
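The following fragment illustrates, in scalar Python rather than the paper's CUDA kernel, how the axis-selection branches of a 3D-DDA step can be replaced by comparisons used as 0/1 selectors, so that all threads would execute the same instruction sequence:

```python
def dda_step(t_max, t_delta, voxel, step):
    """One branch-reduced 3D-DDA step: pick the axis of the next voxel
    crossing with comparisons used as 0/1 selectors instead of nested
    if/else. Illustration of the idea only; not the paper's kernel."""
    cx = (t_max[0] <= t_max[1]) and (t_max[0] <= t_max[2])
    cy = (not cx) and (t_max[1] <= t_max[2])
    cz = not (cx or cy)
    sel = (int(cx), int(cy), int(cz))       # exactly one selector is 1
    t_hit = sum(t_max[k] * sel[k] for k in range(3))
    for k in range(3):                      # update without data-dependent branches
        voxel[k] += step[k] * sel[k]
        t_max[k] += t_delta[k] * sel[k]
    return t_hit
```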
Mal-Xtract: Hidden Code Extraction using Memory Analysis
NASA Astrophysics Data System (ADS)
Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah
2017-01-01
Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and extract the original code from a packed binary executable using memory analysis in a software-emulated environment. Our experimental results show that at least 97% of the original code from various binary executables packed with different software packers could be extracted. The proposed method has also successfully extracted hidden code from recent malware family samples.
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of the origins of detected events in three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a 10 000 000 event acquisition of the simulated data ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to the representation and reconstruction of data obtained by photon-limited emission tomography measurements.
NASA Astrophysics Data System (ADS)
Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet
2016-10-01
Groundwater vulnerability assessment is an accepted practice for identifying zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary-information-based vulnerability assessment approach. The original DRASTIC approach considers the relative importance of features/sub-features based on subjective weighting/rating values. However, variability of features at a smaller scale is not reflected in this subjective vulnerability assessment process. In contrast to the subjective approach, objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system. However, experts' opinions are not directly considered in the objective weighting-based methods. Thus the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC) and single-parameter sensitivity analysis (SA-DRASTIC) - were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. The relative performance of the subjective and objective methods varies with the choice of water quality parameters. The methodology can be applied elsewhere, with or without suitable modification. These evaluations establish the potential applicability of the methodology for general vulnerability assessment in an urban context.
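As an indication of how the objective weights can be obtained, here is a sketch of the entropy weighting idea behind E-DRASTIC (the exact normalization used in the paper may differ):

```python
import numpy as np

def entropy_weights(F):
    """Objective feature weights from spatial variability, in the
    spirit of the entropy information method. F is an
    (n_cells, n_features) matrix of non-negative feature ratings."""
    P = F / F.sum(axis=0)                        # per-feature proportions
    k = 1.0 / np.log(F.shape[0])
    with np.errstate(divide='ignore', invalid='ignore'):
        e = -k * np.nansum(P * np.log(P), axis=0)  # entropy per feature
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()                           # normalized weights
```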
Determination of sex origin of meat and meat products on the DNA basis: a review.
Gokulakrishnan, Palanisamy; Kumar, Rajiv Ranjan; Sharma, Brahm Deo; Mendiratta, Sanjod Kumar; Malav, Omprakash; Sharma, Deepak
2015-01-01
Sex determination of domestic animals' meat is of potential value in meat authentication and quality control studies. Methods aiming at determining the sex origin of meat may be based either on the analysis of hormones or on the analysis of nucleic acids. At the present time, sex determination of meat and meat products based on hormone analysis employs gas chromatography-mass spectrometry (GC-MS), high-performance liquid chromatography-mass spectrometry/mass spectrometry (HPLC-MS/MS), and enzyme-linked immunosorbent assays (ELISA). Most of the hormone-based methods proved to be highly specific and sensitive but are not performed on a regular basis for meat sexing due to technical limitations or the expensive equipment required. On the other hand, the most common methodology for determining the sex of meat is unquestionably traditional polymerase chain reaction (PCR), which involves gel electrophoresis of DNA amplicons. This review is intended to provide an overview of the DNA-based methods for sex determination of meat and meat products.
Huikang Wang; Luzheng Bi; Teng Teng
2017-07-01
This paper proposes a novel electroencephalography (EEG)-based method for detecting a driver's emergency braking intention in brain-controlled driving when one electrode falls off. First, whether an electrode has fallen off is determined from the EEG potentials. Then, the missing signals are estimated from the signals collected at the other channels using multivariate linear regression. Finally, a linear decoder is applied to classify driver intentions. Experimental results show that the falling-off discrimination accuracy is 99.63% on average, and the correlation coefficient and root mean squared error (RMSE) between the estimated and experimental data are 0.90 and 11.43 μV, respectively, on average. Given that one electrode falls off, the system accuracy of the proposed intention prediction method is significantly higher than that of the original method (95.12% vs. 79.11%) and is close to that (95.95%) of the original system under normal conditions (i.e., no electrode falling off).
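The channel-estimation step can be sketched with ordinary least squares, assuming training data recorded while all electrodes are attached (names are illustrative):

```python
import numpy as np

def fit_channel_model(X_train, y_train):
    """Least-squares fit of the multivariate linear model: predict the
    signal of the (possibly falling-off) channel from the remaining
    channels. X_train: (n_samples, n_channels-1), y_train: (n_samples,)."""
    A = np.hstack([X_train, np.ones((X_train.shape[0], 1))])  # add bias term
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return w

def estimate_missing(X, w):
    """Reconstruct the missing channel from the remaining channels."""
    return np.hstack([X, np.ones((X.shape[0], 1))]) @ w
```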
Gaudin, Valérie
2017-04-15
Antibiotic residues may be found in food of animal origin, since veterinary drugs are used for preventive and curative purposes to treat animals. The control of veterinary drug residues in food is necessary to ensure consumer safety. Screening methods are the first step in the control of antibiotic residues in food of animal origin. Conventional screening methods are based on different technologies: microbiological methods, immunological methods or physico-chemical methods (e.g. thin-layer chromatography, HPLC, LC-MS/MS). Screening methods should be simple, quick, inexpensive and specific, with low detection limits and high sample throughput. Biosensors can meet some of these requirements. Therefore, the development of biosensors for the screening of antibiotic residues has been increasing since the 1980s. The present review provides extensive and up-to-date findings on biosensors for the screening of antibiotic residues in food products of animal origin. Biosensors consist of a bioreceptor and a transducer. In the detection of antibiotic residues, even though antibodies were the first bioreceptors to be used, new kinds of bioreceptors (enzymes, aptamers, MIPs) are increasingly being developed; their advantages and drawbacks are discussed in this review. The different categories of transducers (electrochemical, mass-based, optical and thermal) and their potential applications for the screening of antibiotic residues in food are presented. Moreover, the advantages and drawbacks of the different types of transducers are discussed. Lastly, the outlook and future development of biosensors for the control of antibiotic residues in food are highlighted. Copyright © 2016. Published by Elsevier B.V.
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves image edges and can very nearly replicate the original image without any special operation.
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels is presented in this paper. These systems can be regarded as combinations of a fast model and a slow model whose response speeds lie on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix by plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on two time scales. Then a decentralized model predictive controller was designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
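For concreteness, the Arnoldi process that builds such a Krylov basis can be sketched as follows (standard algorithm, not specific to this report):

```python
import numpy as np

def arnoldi(A, v, m):
    """Build an orthonormal basis V of span{v, Av, ..., A^(m-1)v} and
    the (m+1) x m Hessenberg matrix H with A V_m = V_{m+1} H; the size-N
    problem is then projected onto this m-dimensional subspace."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```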
[A research in speech endpoint detection based on boxes-coupling generalization dimension].
Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle
2008-06-01
In this paper, a new method for calculating the generalized dimension, based on the box-coupling principle, is proposed to overcome edge effects and to improve the capability of speech endpoint detection based on the original calculation of the generalized dimension. This new method has been applied to speech endpoint detection. Firstly, the length of the overlapping border was determined, and by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension and the correlation dimension were obtained. Secondly, in light of the relation between feature distance and similarity degree, feature extraction was conducted using a common distance. Lastly, a bi-threshold method was used to classify the speech signals. The experimental results indicated that, in comparison with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals containing different kinds of noise at different signal-to-noise ratios (SNRs), especially at low SNR.
NASA Astrophysics Data System (ADS)
Zhang, B.; Sang, Jun; Alam, Mohammad S.
2013-03-01
An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying the public-key encryption algorithm, key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may achieve higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Increasing frequency of red tide outbreaks has been reported around the world. This is of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high accuracy detection method based on a fully-connected deep CNN detection model with 8 layers to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image increased by 8.62% and 6.37%. This also indicates that image glint seriously affects the accuracy of red tide detection.
Methods for Data-based Delineation of Spatial Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, John E.
In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are "clumped" together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
Ultra-low background DNA cloning system.
Goto, Kenta; Nagano, Yukio
2013-01-01
Yeast-based in vivo cloning is useful for cloning DNA fragments into plasmid vectors and is based on the ability of yeast to recombine the DNA fragments by homologous recombination. Although this method is efficient, it produces some by-products. We have developed an "ultra-low background DNA cloning system" on the basis of yeast-based in vivo cloning, by almost completely eliminating the generation of by-products and applying the method to commonly used Escherichia coli vectors, particularly those lacking yeast replication origins and carrying an ampicillin resistance gene (Amp(r)). First, we constructed a conversion cassette containing the DNA sequences in the following order: an Amp(r) 5' UTR (untranslated region) and coding region, an autonomous replication sequence and a centromere sequence from yeast, a TRP1 yeast selectable marker, and an Amp(r) 3' UTR. This cassette allowed conversion of the Amp(r)-containing vector into the yeast/E. coli shuttle vector through use of the Amp(r) sequence by homologous recombination. Furthermore, simultaneous transformation of the desired DNA fragment into yeast allowed cloning of this DNA fragment into the same vector. We rescued the plasmid vectors from all yeast transformants, and by-products containing the E. coli replication origin disappeared. Next, the rescued vectors were transformed into E. coli and the by-products containing the yeast replication origin disappeared. Thus, our method used yeast- and E. coli-specific "origins of replication" to eliminate the generation of by-products. Finally, we successfully cloned the DNA fragment into the vector with almost 100% efficiency.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work is a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (and other types of noise) on the original training samples to obtain possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
Chew, David S. H.; Choi, Kwok Pui; Leung, Ming-Ying
2005-01-01
Many empirical studies show that there are unusual clusters of palindromes, closely spaced direct and inverted repeats around the replication origins of herpesviruses. In this paper, we introduce two new scoring schemes to quantify the spatial abundance of palindromes in a genomic sequence. Based on these scoring schemes, a computational method to predict the locations of replication origins is developed. When our predictions are compared with 39 known or annotated replication origins in 19 herpesviruses, close to 80% of the replication origins are located within 2% of the genome length. A list of predicted locations of replication origins in all the known herpesviruses with complete genome sequences is reported. PMID:16141192
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied alternately with the original procedures of the algorithm. The approach is tested on twelve widely-used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
Evaluation of speaker de-identification based on voice gender and age conversion
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich
2018-03-01
Two basic tasks are covered in this paper. The first consists of the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after the applied voice de-identification. The performed experiments confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. The original speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.
A Novel Antibody Humanization Method Based on Epitopes Scanning and Molecular Dynamics Simulation
Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen
2013-01-01
1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity-determining region (CDR) grafting often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling technology to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Then virtual mutation and molecular dynamics (MD) simulation were used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs. These residues needed to be back-mutated to rescue the antibody binding affinity. Finally we constructed the antibodies in vitro and compared their binding affinity by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results have established a novel method based on epitope scanning and MD simulation for antibody humanization. PMID:24278299
Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan
2016-01-01
In an effort to implement fast and effective tank segmentation from infrared images with complex backgrounds, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images with complex backgrounds is proposed based on the Otsu method, via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes - target region, middle background and lower background - by maximizing the sum of their between-class variances. Then, an unsupervised background constraint is applied based on the within-class variance of the target region, so that the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) with complex backgrounds demonstrate that the proposed method enjoys better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, implying that the new method performs well in real-time processing.
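The final threshold-selection step is the classical Otsu criterion, which can be sketched as follows (standard formulation; the paper's contribution is the three-class division and the background constraint applied beforehand):

```python
import numpy as np

def otsu_threshold(gray):
    """Standard Otsu threshold: choose the gray level maximizing the
    between-class variance. `gray` is a uint8 image array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))     # cumulative means
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return int(np.nanargmax(sigma_b))      # threshold with maximal variance
```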
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to the finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
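A bare-bones version of the underlying zero-crossing estimate, without the finite-cutoff corrections that are the subject of the paper, might read as follows (the Liepmann-scale relation and isotropic dissipation formula are the textbook forms; parameters are illustrative):

```python
import numpy as np

def dissipation_zero_crossing(u, fs, U, nu=1.5e-5):
    """Zero-crossing TKE dissipation estimate in the spirit of
    Sreenivasan et al. (1983): crossings per unit length N give the
    Liepmann scale L = 1/(pi*N), taken as a proxy for the Taylor
    microscale, and eps = 15*nu*var(u')/L^2 under isotropy.
    u: velocity series [m/s], fs: sampling rate [Hz], U: mean
    airspeed [m/s] for Taylor's frozen-turbulence hypothesis."""
    up = u - u.mean()                        # velocity fluctuations
    crossings = np.sum(up[:-1] * up[1:] < 0)
    N = crossings * fs / (len(u) * U)        # crossings per unit length
    L = 1.0 / (np.pi * N)                    # Liepmann/Taylor scale
    return 15.0 * nu * up.var() / L**2
```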
Unsupervised malaria parasite detection based on phase spectrum.
Fang, Yuming; Xiong, Wei; Lin, Weisi; Chen, Zhenzhong
2011-01-01
In this paper, we propose a novel method for malaria parasite detection based on the phase spectrum. The method first obtains the amplitude spectrum and phase spectrum of blood smear images through the quaternion Fourier transform (QFT). Then it obtains the reconstructed image through the inverse quaternion Fourier transform (IQFT) of a constant amplitude spectrum combined with the original phase spectrum. The malaria parasite areas can be detected easily from the reconstructed blood smear images. Extensive experiments have demonstrated the effectiveness of this novel method.
Original analytic solution of a half-bridge modelled as a statically indeterminate system
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela
2016-12-01
The paper presents an original computer-based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated either by discretizing the bending moment diagram into trapezoids or by using the relations specific to polygons. A second algorithm based on the method of initial parameters is also presented. Analyzing the new solution, we came to the conclusion that most of the computer code developed for the other model may be reused. The results are useful for evaluating the behavior of the structure and for comparison with the results of finite element models.
An improved design method of a tuned mass damper for an in-service footbridge
NASA Astrophysics Data System (ADS)
Shi, Weixing; Wang, Liangkun; Lu, Zheng
2018-03-01
The tuned mass damper (TMD) has a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. Firstly, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field tests; then the original FEM is updated. The TMD is designed according to the updated FEM and optimized through simulations of its vibration control effects. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges. This paper illustrates the improved design method with an engineering example. The frequency identification results of the field test and the original FEM show a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD effectively controls human-induced vibrations.
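Once the footbridge's modal properties are known from the updated FEM, a TMD can be sized with the classical Den Hartog rules; the sketch below uses these standard formulas, which may differ from the optimization actually performed in the paper:

```python
import numpy as np

def den_hartog_tmd(m_modal, f_structure, mu=0.01):
    """Classical Den Hartog tuning: given the modal mass and natural
    frequency of the controlled mode and a chosen mass ratio mu,
    return TMD mass, tuned frequency and optimal damping ratio."""
    m_tmd = mu * m_modal
    f_tmd = f_structure / (1.0 + mu)                      # optimal frequency
    zeta = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))    # optimal damping
    return m_tmd, f_tmd, zeta
```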
Human ear detection in the thermal infrared spectrum
NASA Astrophysics Data System (ADS)
Abaza, Ayman; Bourlai, Thirimachos
2012-06-01
In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach [1], which was designed to be applied mainly to visible-band images). The main advantage of the proposed method, applied to our profile face image data set collected in the thermal band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~ 91.5%. Further analysis on our data set yielded that: (a) photometric normalization techniques do not directly improve ear detection performance; however, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~ 4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of the thermal ear images. For example, even after using one third of the original spatial resolution (i.e. ~ 20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method remained unaffected. This also resulted in speeding up the detection time of an ear image from 265 to 17 milliseconds per image. To the best of our knowledge, this is the first time that the problem of human ear detection in the thermal band has been investigated in the open literature.
Determination of 13C/12C Isotope Ratio in Alcohols of Different Origin by 1H Nuclei NMR-Spectroscopy
NASA Astrophysics Data System (ADS)
Dzhimak, S. S.; Basov, A. A.; Buzko, V. Yu.; Kopytov, G. F.; Kashaev, D. V.; Shashkov, D. I.; Shlapakov, M. S.; Baryshev, M. G.
2017-02-01
A new express method for the indirect assessment of the 13C/12C isotope ratio via 1H nuclei is developed to verify the authenticity of ethanol origin in alcohol-water-based fluids and to detect the falsification of various alcoholic beverages. It is established that in water-based alcohol-containing systems, the side satellites of the signals of ethanol methyl and methylene protons in the 1H NMR spectra correspond to the protons associated with 13C nuclei. There is a direct correlation between the 1H NMR signal intensities of the ethanol methyl and methylene protons and those of their side satellites; therefore, the data obtained can be used to assess the 13C/12C isotope ratio in water-based alcohol-containing systems. The analysis of the integrated intensities of the main and satellite signals of the methyl and methylene protons of ethanol obtained by 1H NMR makes it possible to differentiate between ethanol of synthetic and natural origin. Furthermore, the proposed method made it possible to differentiate between wheat and corn ethanol.
Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y
2016-01-01
The design of slice-selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach to solving these optimal control problems. In this method the gradient field as well as the excitation field are switched rapidly, and their amplitudes are calculated based on a Fourier series expansion. Here, we provide a novel insight into the Fourier synthesis method by representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses that can be used for slice selection and is more time-efficient than the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the required time for the proposed sequence of pulses is half that of the original sequence. Furthermore, the slice selectivity of both sequences of pulses changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of both sequences of pulses. This measure indicates that for a desired level of uniformity in the excited slice, the gradient complexity of the proposed sequence of pulses is less than that of the original sequence. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Critically Reflective Leadership
ERIC Educational Resources Information Center
Cunningham, Christine L.
2012-01-01
Critical Reflective Practice (CRP) has a proven reputation as a method for teacher-researchers in K-12 classrooms, but there have been few published examples of this method being used to document school leaders' work-based practice. This paper outlines adaptations made by the author from an original CRP method to a Critically Reflective Leadership…
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified; the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods.
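A schematic of the modified teacher phase, under the stated idea that the old position, the mean position and the best (teacher) position all enter the update; the exact coefficients are our assumption, following the usual TLBO form:

```python
import numpy as np

def teacher_phase_step(X, X_best, rng):
    """One schematic teacher-phase update for a population X
    (n_students, n_vars): pull each student toward the teacher
    relative to the class mean, plus a direct attraction to the
    teacher, which avoids stagnation when mean == teacher."""
    mean = X.mean(axis=0)
    TF = rng.integers(1, 3)                 # teaching factor in {1, 2}
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    return X + r1 * (X_best - TF * mean) + r2 * (X_best - X)

rng = np.random.default_rng(0)
X = rng.random((20, 5))                     # 20 students, 5 variables
X_new = teacher_phase_step(X, X[0], rng)    # X[0] stands in for the teacher
```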
NASA Astrophysics Data System (ADS)
Li, Jie; Zhang, Ji; Zhao, Yan-Li; Huang, Heng-Yu; Wang, Yuan-Zhong
2017-12-01
Roots, stems, leaves and flowers of Longdan (Gentiana rigescens Franch. ex Hemsl) were collected from six geographic origins in Yunnan Province (n = 240) to implement quality assessment based on the contents of gentiopicroside, loganic acid, sweroside and swertiamarin and on the chemical profile, using HPLC-DAD and FTIR methods combined with principal component analysis (PCA). The content of gentiopicroside (the major iridoid glycoside) was the highest in G. rigescens, regardless of tissue and geographic origin. The level of swertiamarin was the lowest, and it could not even be detected in samples from Kunming and Qujing. Significant correlations (p < 0.05) between gentiopicroside, loganic acid, sweroside and swertiamarin were found at inter- or intra-tissue levels, which highly depended on geographic origin, indicating the influence of environmental conditions on the conversion and transport of secondary metabolites in G. rigescens. Furthermore, samples were reasonably classified into three clusters along large producing areas with similar climate conditions, characterized by carbohydrates, phenols, benzoates, terpenoids, aliphatic alcohols, aromatic hydrocarbons, and so forth. The present work provides global information on the chemical profile and contents of the major iridoid glycosides in G. rigescens originating from six different origins, which is helpful for systematically controlling the quality of herbal medicines.
Xu, Ning; Zhou, Guofu; Li, Xiaojuan; Lu, Heng; Meng, Fanyun; Zhai, Huaqiang
2017-05-01
A reliable and comprehensive method for identifying the origin and assessing the quality of Epimedium has been developed. The method is based on analysis of HPLC fingerprints, combined with similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and multi-ingredient quantitative analysis. Nineteen batches of Epimedium, collected from different areas in the western regions of China, were used to establish the fingerprints and 18 peaks were selected for the analysis. Similarity analysis, HCA and PCA all classified the 19 areas into three groups. Simultaneous quantification of the five major bioactive ingredients in the Epimedium samples was also carried out to confirm the consistency of the quality tests. These methods were successfully used to identify the geographical origin of the Epimedium samples and to evaluate their quality. Copyright © 2016 John Wiley & Sons, Ltd.
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now just becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas of Krylov subspace methods are presented, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Lyapunov matrix equations, and nonlinear systems of equations is discussed.
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithm outperforms some other regularization methods. PMID:23776560
Pseudo CT estimation from MRI using patch-based random forest
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian
2017-02-01
Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
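A minimal sketch of the patch-based regression idea, with synthetic stand-ins for the aligned MR/CT pair and scikit-learn's random forest (the paper's feature-selection step is omitted):

```python
# Sketch: MR patches around each voxel are the features, the co-registered
# CT value is the regression target (all data synthetic).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
mr = rng.random((64, 64))                                 # stand-in aligned MR slice
ct = 1000.0 * mr + 20.0 * rng.standard_normal((64, 64))   # stand-in CT slice

def patches(img, r=2):
    # 5x5 neighborhood around every pixel, flattened into a feature vector
    feats, pad = [], np.pad(img, r, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            feats.append(pad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel())
    return np.asarray(feats)

X, y = patches(mr), ct.ravel()
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pseudo_ct = model.predict(X).reshape(ct.shape)   # would be a new patient in practice
print(np.sqrt(np.mean((pseudo_ct - ct) ** 2)))   # crude accuracy check
```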
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
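The correction itself is simple enough to sketch directly in NumPy; the synthetic flat-field stack and the sign convention of the compensation are assumptions for illustration:

```python
# Sketch of the described correction: estimate row/column FPN from the mean
# of many flat-field frames, then compensate each pixel of a raw frame.
import numpy as np

rng = np.random.default_rng(0)
frames = 500.0 + rng.standard_normal((100, 128, 128))   # 100 flat-field images
frames += rng.standard_normal((1, 128, 1))              # inject synthetic row FPN
frames += rng.standard_normal((1, 1, 128))              # inject synthetic column FPN

mean_img = frames.mean(axis=0)
row_fpn = mean_img.mean(axis=1) - mean_img.mean()       # row-mean vector offsets
col_fpn = mean_img.mean(axis=0) - mean_img.mean()       # column-mean vector offsets

corrected = frames[0] - row_fpn[:, None] - col_fpn[None, :]  # compensate one frame
print(mean_img.mean(axis=1).std(), corrected.mean(axis=1).std())  # RFPN spread shrinks
```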
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain, in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, the discretization using convex polygons is a simpler approach. Moreover, this method doesn't lead to large numbers as the spline approximation did, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
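The solid/hollow polygon idea can be illustrated with the shoelace formula, where a hollow polygon contributes its area negatively to a composite cross-section (a toy square-with-hole example, not the paper's software):

```python
# Sketch: signed (shoelace) area of a polygon; hollow polygons are
# subtracted from solid ones to build composite calculus domains.
def shoelace_area(pts):
    # pts: list of (x, y) vertices in counter-clockwise order
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return 0.5 * s

outer = [(0, 0), (4, 0), (4, 4), (0, 4)]           # solid square, area 16
hole = [(1, 1), (3, 1), (3, 3), (1, 3)]            # hollow square, area 4
print(shoelace_area(outer) - shoelace_area(hole))  # composite area = 12
```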
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
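A small simulation in the spirit of this argument (the design, effect size, and sample sizes here are illustrative assumptions, not the paper's): only significant originals get "published", the replication is sized from the observed effect, and the realized power falls well short of the intended .80:

```python
# Sketch: actual vs intended power when a replication is planned from the
# observed effect size of a significant, underpowered original study.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
true_d, n_orig, solver = 0.3, 20, TTestIndPower()
hits, reps = 0, 500
for _ in range(reps):
    while True:  # publication bias: only a significant original is replicated
        a = rng.standard_normal(n_orig) + true_d
        b = rng.standard_normal(n_orig)
        if stats.ttest_ind(a, b).pvalue < 0.05 and a.mean() > b.mean():
            break
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)   # pooled SD
    d_obs = (a.mean() - b.mean()) / sp                  # observed (inflated) effect
    n_rep = int(np.ceil(solver.solve_power(effect_size=d_obs, power=0.8, alpha=0.05)))
    x = rng.standard_normal(n_rep) + true_d
    y = rng.standard_normal(n_rep)
    hits += stats.ttest_ind(x, y).pvalue < 0.05
print(hits / reps)  # realized power, well below the intended 0.80
```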
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
Using register data to deduce patterns of social exchange.
Jansson, Fredrik
2017-07-01
This paper presents a novel method for deducing propensities for social exchange between individuals based on the choices they make, and based on factors such as country of origin, sex, school grades and socioeconomic background. The objective here is to disentangle the effect of social ties from the other factors, in order to find patterns of social exchange. This is done through a control-treatment design applied to available data, where the 'treatment' is similarity of choices between socially connected individuals, and the control is similarity of choices between non-connected individuals. Structural dependencies are controlled for and effects from different classes are pooled through a mix of methods from network analysis and meta-analysis. The method is demonstrated and tested on Swedish register data on students at upper secondary school. The results show that having similar grades is a predictor of social exchange. Also, previous results from Norwegian data are replicated, showing that students cluster based on country of origin.
An improved multi-paths optimization method for video stabilization
NASA Astrophysics Data System (ADS)
Qin, Tao; Zhong, Sheng
2018-03-01
For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model, and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax, as we calculate the space-time correlation of adjacent grid cells and then use a Gaussian kernel to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.
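The smoothing step can be sketched in one dimension: a Gaussian kernel applied to a jittery camera path, standing in for the paper's per-grid multi-path optimization:

```python
# Sketch: Gaussian-weighted smoothing of a 1-D camera motion path.
import numpy as np

def gaussian_smooth_path(path, sigma=5.0, radius=15):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()                                   # normalized Gaussian kernel
    padded = np.pad(path, radius, mode="edge")     # replicate endpoints
    return np.convolve(padded, k, mode="valid")

jittery = np.cumsum(np.random.default_rng(0).standard_normal(200))  # shaky path
smooth = gaussian_smooth_path(jittery)
print(np.abs(np.diff(smooth)).mean(), np.abs(np.diff(jittery)).mean())  # less jitter
```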
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
An efficient and near linear scaling pair natural orbital based local coupled cluster method.
Riplinger, Christoph; Neese, Frank
2013-01-21
In previous publications, it was shown that an efficient local coupled cluster method with single and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital coupled-cluster singles and doubles (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10(5)-10(6) relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth-order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron-pair-specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, which implies typically 15-20 but occasionally up to 30 atoms per domain on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method had three adjustable truncation thresholds that were chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced. Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black-box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development, reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction
Cruz-Cano, Raul; Chew, David S. H.; Choi, Kwok-Pui; Leung, Ming-Ying
2010-01-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications. PMID:20729987
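LS-SVM training reduces to a single linear system, which is easy to sketch; the RBF kernel, the toy two-class data standing in for origin/non-origin sequence windows, and the regression-style formulation are illustrative assumptions:

```python
# Minimal LS-SVM sketch: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# with an RBF kernel, then classify with sign(K @ alpha + b).
import numpy as np

def rbf(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((30, 2)) - 1, rng.standard_normal((30, 2)) + 1])
y = np.array([-1.0] * 30 + [1.0] * 30)   # stand-in for origin / non-origin windows

n, gamma = len(y), 10.0
K = rbf(X, X)
M = np.zeros((n + 1, n + 1))
M[0, 1:], M[1:, 0], M[1:, 1:] = 1.0, 1.0, K + np.eye(n) / gamma
b_alpha = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = b_alpha[0], b_alpha[1:]

pred = np.sign(K @ alpha + b)            # decision function on the training data
print((pred == y).mean())
```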
Jiang, Weiqin; Shen, Yifei; Ding, Yongfeng; Ye, Chuyu; Zheng, Yi; Zhao, Peng; Liu, Lulu; Tong, Zhou; Zhou, Linfu; Sun, Shuo; Zhang, Xingchen; Teng, Lisong; Timko, Michael P; Fan, Longjiang; Fang, Weijia
2018-01-15
Synchronous multifocal tumors are common in the hepatobiliary and pancreatic system but because of similarities in their histological features, oncologists have difficulty in identifying their precise tissue clonal origin through routine histopathological methods. To address this problem and assist in more precise diagnosis, we developed a computational approach for tissue origin diagnosis based on naive Bayes algorithm (TOD-Bayes) using ubiquitous RNA-Seq data. Massive tissue-specific RNA-Seq data sets were first obtained from The Cancer Genome Atlas (TCGA) and ∼1,000 feature genes were used to train and validate the TOD-Bayes algorithm. The accuracy of the model was >95% based on tenfold cross validation by the data from TCGA. A total of 18 clinical cancer samples (including six negative controls) with definitive tissue origin were subsequently used for external validation and 17 of the 18 samples were classified correctly in our study (94.4%). Furthermore, we included as cases studies seven tumor samples, taken from two individuals who suffered from synchronous multifocal tumors across tissues, where the efforts to make a definitive primary cancer diagnosis by traditional diagnostic methods had failed. Using our TOD-Bayes analysis, the two clinical test cases were successfully diagnosed as pancreatic cancer (PC) and cholangiocarcinoma (CC), respectively, in agreement with their clinical outcomes. Based on our findings, we believe that the TOD-Bayes algorithm is a powerful novel methodology to accurately identify the tissue origin of synchronous multifocal tumors of unknown primary cancers using RNA-Seq data and an important step toward more precision-based medicine in cancer diagnosis and treatment. © 2017 UICC.
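A minimal sketch of the classification idea with scikit-learn's Gaussian naive Bayes and synthetic stand-ins for the tissue-specific expression profiles (the actual TOD-Bayes feature selection and TCGA data are not reproduced):

```python
# Sketch: naive Bayes over a reduced set of "feature gene" expression values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
tissues, n_per, n_genes = 3, 100, 50
means = rng.random((tissues, n_genes)) * 5                 # tissue-specific profiles
X = np.vstack([rng.normal(means[t], 1.0, (n_per, n_genes)) for t in range(tissues)])
y = np.repeat(np.arange(tissues), n_per)                   # hypothetical tissue labels

scores = cross_val_score(GaussianNB(), X, y, cv=10)        # tenfold cross validation
print(scores.mean())
```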
Loosli, Gaelle; Canu, Stephane; Ong, Cheng Soon
2016-06-01
This paper presents a theoretical foundation for an SVM solver in Kreĭn spaces. Up to now, all methods have been based either on matrix correction, on non-convex minimization, or on feature-space embedding. Here we justify and evaluate a solution that uses the original (indefinite) similarity measure, in the original Kreĭn space. This solution is the result of a stabilization procedure. We establish the correspondence between the stabilization problem (which has to be solved) and a classical SVM based on minimization (which is easy to solve). We provide simple equations to go from one to the other (in both directions). This link between stabilization and minimization problems is the key to obtaining a solution in the original Kreĭn space. Using KSVM, one can solve SVMs with usually troublesome kernels (large negative eigenvalues or large numbers of negative eigenvalues). We present experiments showing that our algorithm KSVM outperforms all previously proposed approaches to deal with indefinite matrices in SVM-like kernel methods.
Numerical solution of modified differential equations based on symmetry preservation.
Ozbenli, Ersin; Vedula, Prakash
2017-12-01
In this paper, we propose a method to construct invariant finite-difference schemes for the solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore the selection of appropriate moving frames that result in improved accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one and two dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes, not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.
IDENTIFICATION OF SOURCES OF FECAL POLLUTION IN ENVIRONMENTAL WATERS
A number of Microbial Source Tracking (MST) methods are currently used to determine the origin of fecal pollution impacting environmental waters. MST is based on the assumption that given the appropriate method and indicator organism, the source of fecal microbial pollution can ...
Applications of Fault Detection in Vibrating Structures
NASA Technical Reports Server (NTRS)
Eure, Kenneth W.; Hogge, Edward; Quach, Cuong C.; Vazquez, Sixto L.; Russell, Andrew; Hill, Boyd L.
2012-01-01
Structural fault detection and identification remains an area of active research. Solutions to fault detection and identification may be based on subtle changes in the time series history of vibration signals originating from various sensor locations throughout the structure. The purpose of this paper is to document the application of vibration based fault detection methods applied to several structures. Overall, this paper demonstrates the utility of vibration based methods for fault detection in a controlled laboratory setting and limitations of applying the same methods to a similar structure during flight on an experimental subscale aircraft.
A Method for Establishing a Depreciated Monetary Value for Print Collections.
ERIC Educational Resources Information Center
Marman, Edward
1995-01-01
Outlines a method for establishing a depreciated value of a library collection and includes an example of applying the formula for calculating depreciation. The method is based on the useful life of books, other print, and audio visual materials; their original cost; and on sampling subsets or sections of the collection. (JKP)
Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations
ERIC Educational Resources Information Center
Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey
2016-01-01
The purpose of this study is to study the performance of different methods (inverse probability weighting and estimation of informative bounds) to control for differential attrition by comparing the results of different methods using two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…
NASA Astrophysics Data System (ADS)
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
Personality Disorders in People with Learning Disabilities: Follow-Up of a Community Survey
ERIC Educational Resources Information Center
Lidher, J.; Martin, D. M.; Jayaprakash, M. S.; Roy, A.
2005-01-01
Background: A sample of community-based service users with intellectual disability (ID) was re-examined after 5 years to determine the impact of a diagnosis of personality disorder (PD). Methods: Seventy-five of the original 101 participants were followed up. Of these, 21 people had a PD identified during the original study. Results: Compared with…
An Evolutionary Framework for Understanding the Origin of Eukaryotes.
Blackstone, Neil W
2016-04-27
Two major obstacles hinder the application of evolutionary theory to the origin of eukaryotes. The first is more apparent than real: the endosymbiosis that led to the mitochondrion is often described as "non-Darwinian" because it deviates from the incremental evolution championed by the modern synthesis. Nevertheless, endosymbiosis can be accommodated by a multi-level generalization of evolutionary theory, which Darwin himself pioneered. The second obstacle is more serious: all of the major features of eukaryotes were likely present in the last eukaryotic common ancestor, thus rendering comparative methods ineffective. In addition to a multi-level theory, the development of rigorous, sequence-based phylogenetic and comparative methods represents the greatest achievement of modern evolutionary theory. Nevertheless, the rapid evolution of major features in the eukaryotic stem group requires the consideration of an alternative framework. Such a framework, based on the contingent nature of these evolutionary events, is developed and illustrated with three examples: the putative intron proliferation leading to the nucleus and the cell cycle; conflict and cooperation in the origin of eukaryotic bioenergetics; and the inter-relationship between aerobic metabolism, sterol synthesis, membranes, and sex. The modern synthesis thus provides sufficient scope to develop an evolutionary framework to understand the origin of eukaryotes.
NASA Astrophysics Data System (ADS)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process.
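The innovation-based endpoint logic can be sketched with a scalar random-walk Kalman filter; the synthetic friction signal and noise levels are assumptions:

```python
# Sketch: a scalar Kalman filter tracks the (denoised) friction signal; the
# innovation (measurement minus prediction) spikes when the mean level
# shifts, i.e., at the polishing endpoint.
import numpy as np

rng = np.random.default_rng(0)
signal = np.concatenate([np.full(200, 1.0), np.full(100, 0.6)])  # level drop = endpoint
z = signal + 0.05 * rng.standard_normal(300)                     # measured signal

x, P, q, r = z[0], 1.0, 1e-5, 0.05**2
innovations = []
for zk in z:
    P += q                        # predict (random-walk state model)
    nu = zk - x                   # innovation
    K = P / (P + r)               # Kalman gain
    x += K * nu                   # update state estimate
    P *= (1 - K)
    innovations.append(nu)

print(np.argmax(np.abs(innovations)))   # spikes near sample 200, the endpoint
```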
NASA Astrophysics Data System (ADS)
Diego, M. C. R.; Purwanto, Y. A.; Sutrisno; Budiastra, I. W.
2018-05-01
Research related to the non-destructive near-infrared (NIR) spectroscopy method for aromatic oils is still in development in Indonesia. The objectives of the study were to determine the characteristics of the near-infrared spectra of patchouli oil and to classify it based on its origin. The samples were selected from seven different places in Indonesia (Bogor and Garut from West Java; Aceh and Jambi from Sumatra; and Konawe, Masamba and Kolaka from Sulawesi Island). The spectral data of patchouli oil were obtained by an FT-NIR spectrometer at wavelengths of 1000-2500 nm, after which the samples were subjected to composition analysis using gas chromatography-mass spectrometry. The transmittance and absorbance spectra were analyzed and then principal component analysis (PCA) was carried out. Discriminant analysis (DA) of the principal components was developed to classify patchouli oil based on its origin. The results show that both spectra (transmittance and absorbance) give similar results in the PC analysis for discriminating the seven types of patchouli oil, given their distribution and behavior. The DA based on the first three principal components of both processed spectra could classify patchouli oil accurately. This result shows that NIR spectroscopy can be used as a reliable method to classify patchouli oil based on its origin.
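A compact sketch of the PCA-then-DA pipeline on synthetic "spectra" (scikit-learn's linear discriminant analysis standing in for the study's DA):

```python
# Sketch: PCA compresses the wavelength axis, LDA separates the origins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
origins, n_per, n_wl = 7, 10, 300                  # 7 origins, 300 "wavelengths"
base = rng.random((origins, n_wl))                 # origin-specific spectral shapes
X = np.vstack([b + 0.05 * rng.standard_normal((n_per, n_wl)) for b in base])
y = np.repeat(np.arange(origins), n_per)

clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis()).fit(X, y)
print(clf.score(X, y))                             # accuracy of the PC-DA model
```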
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet shrinkage is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis for the flue gas of natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with the nonlinear problems effectively. In the paper, a nonlinear partial least squares method with extended input based on radial basis function neural network (RBFNN) is used for components prediction of flue gas. For the proposed method, the original independent input matrix is the input of RBFNN and the outputs of hidden layer nodes of RBFNN are the extension term of the original independent input matrix. Then, the partial least squares regression is performed on the extended input matrix and the output matrix to establish the components prediction model of flue gas. A near-infrared spectral dataset of flue gas of natural gas combustion is used for estimating the effectiveness of the proposed method compared with PLS. The experiments results show that the root-mean-square errors of prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are, respectively, reduced by 4.74%, 21.76%, and 5.32% compared to those of PLS. Hence, the proposed method has higher predictive capabilities and better robustness.
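The extended-input construction is easy to sketch: RBF hidden-layer outputs are appended to the original inputs before an ordinary PLS regression; the data, centers, and kernel width here are illustrative:

```python
# Sketch: extend the input matrix with RBF hidden-layer outputs, then fit PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((80, 20))                        # stand-in near-infrared inputs
y = np.sin(X.sum(axis=1, keepdims=True))        # nonlinear target (e.g., CH4 content)

centers = X[rng.choice(80, 10, replace=False)]  # RBF centers picked from the data
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
H = np.exp(-d2 / (2 * 0.5**2))                  # hidden-layer outputs

X_ext = np.hstack([X, H])                       # extended input matrix
pls = PLSRegression(n_components=5).fit(X_ext, y)
print(((pls.predict(X_ext) - y) ** 2).mean())   # in-sample squared error
```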
FCDECOMP: decomposition of metabolic networks based on flux coupling relations.
Rezvan, Abolfazl; Marashi, Sayed-Amir; Eslahchi, Changiz
2014-10-01
A metabolic network model provides a computational framework to study the metabolism of a cell at the system level. Due to their large size and complexity, rational decomposition of these networks into subsystems is a strategy to obtain better insight into the metabolic functions. Additionally, decomposing metabolic networks paves the way to use computational methods that would otherwise be very slow when run on the original genome-scale network. In the present study, we propose the FCDECOMP decomposition method based on flux coupling relations (FCRs) between pairs of reaction fluxes. This approach utilizes a genetic algorithm (GA) to obtain subsystems that can be analyzed in isolation, i.e. without considering the reactions of the original network in the analysis. Therefore, we propose that our method is useful for discovering biologically meaningful modules in metabolic networks. As a case study, we show that when this method is applied to the metabolic networks of barley seeds and yeast, the modules are in good agreement with the biological compartments of these networks.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
Color preservation for tone reproduction and image enhancement
NASA Astrophysics Data System (ADS)
Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh
2014-01-01
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstruct a color image from the luminance output is by preserving the original hue and saturation. However, this approach often produces a highly colorful image which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea to realize this method. In addition, a lightness difference metric together with a colorfulness difference metric are proposed to evaluate the performance of the color preservation methods. It shows that the proposed method performs consistently better than the existing approaches.
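Ratio preservation itself fits in a few lines; the Rec. 601 luminance weights and the 1.3x tone curve are stand-ins for the paper's tone-reproduction stage:

```python
# Sketch: each output channel is the input channel scaled by the ratio of
# processed to original luminance, so the tri-chromatic ratios are retained.
import numpy as np

rng = np.random.default_rng(0)
rgb_in = rng.random((4, 4, 3))
lum_in = rgb_in @ np.array([0.299, 0.587, 0.114])   # input luminance
lum_out = np.clip(lum_in * 1.3, 0, 1)               # stand-in tone-mapped luminance

scale = lum_out / np.maximum(lum_in, 1e-6)
rgb_out = np.clip(rgb_in * scale[..., None], 0, 1)  # hue/saturation ratios preserved
print(rgb_out[0, 0], rgb_in[0, 0])
```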
Detection of Parent-of-Origin Effects Using General Pedigree Data
Zhou, Ji-Yuan; Ding, Jie; Fung, Wing K.; Lin, Shili
2010-01-01
Genomic imprinting is an important epigenetic factor in complex traits study, which has generally been examined by testing for parent-of-origin effects of alleles. For a diallelic marker locus, the parental-asymmetry test (PAT) based on case-parents trios and its extensions to incomplete nuclear families (1-PAT and C-PAT) are simple and powerful for detecting parent-of-origin effects. However, these methods are suitable only for nuclear families and thus are not amenable to general pedigree data. Use of data from extended pedigrees, if available, may lead to more powerful methods than randomly selecting one two-generation nuclear family from each pedigree. In this study, we extend PAT to accommodate general pedigree data by proposing the pedigree PAT (PPAT) statistic, which uses all informative family trios from pedigrees. To fully utilize pedigrees with some missing genotypes, we further develop the Monte Carlo (MC) PPAT (MCPPAT) statistic based on MC sampling and estimation. Extensive simulations were carried out to evaluate the performance of the proposed methods. Under the assumption that the pedigrees and their associated affection patterns are randomly drawn from a population of pedigrees with at least one affected offspring, we demonstrated that MCPPAT is a valid test for parent-of-origin effects in the presence of association. Further, MCPPAT is much more powerful compared to PAT for trios or even PPAT for all informative family trios from the same pedigrees if there is missing data. Application of the proposed methods to a rheumatoid arthritis dataset further demonstrates the advantage of MCPPAT. PMID:19676055
[A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].
Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong
2011-06-01
For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High-dimensional image features were extracted, and for each pixel the probabilities of its origin, in either the tumor or the background region, were estimated by exploiting a weighted K-nearest-neighborhood classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph-cut optimization framework was used to minimize the energy function. The proposed method was evaluated by application to the segmentation of MR images of meningioma, and the results showed that the method significantly improved segmentation accuracy compared with the gray-level-information-based graph cut method.
A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network.
Guo, Sheng; Yang, Tao; Gao, Wei; Zhang, Chen
2018-05-04
Fault diagnosis is critical to ensure the safety and reliable operation of rotating machinery. Most methods used in fault diagnosis of rotating machinery extract a few feature values from vibration signals for fault diagnosis, which is a dimensionality reduction from the original signal and may omit some important fault messages in the original signal. Thus, a novel diagnosis method is proposed involving the use of a convolutional neural network (CNN) to directly classify the continuous wavelet transform scalogram (CWTS), which is a time-frequency domain transform of the original signal and can contain most of the information of the vibration signals. In this method, the CWTS is formed by decomposing vibration signals of rotating machinery at different scales using the wavelet transform. Then the CNN is trained to diagnose faults, with the CWTS as the input. A series of experiments is conducted on the rotor experiment platform using this method. The results indicate that the proposed method can diagnose the faults accurately. To verify the universality of this method, the trained CNN was also used to perform fault diagnosis for another piece of rotor equipment, and a good result was achieved.
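The CWTS input described above can be sketched with PyWavelets; the two-tone test signal, scale range, and Morlet wavelet are illustrative choices (the CNN itself is omitted):

```python
# Sketch: build a continuous wavelet transform scalogram (CWTS) of a
# vibration signal; the resulting 2-D array is what the CNN would classify.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
vib = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # toy vibration

scales = np.arange(1, 65)                        # 64 scales -> 64-row scalogram
coef, freqs = pywt.cwt(vib, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coef)                         # shape (64, 1000): CNN input image
print(scalogram.shape, freqs[:3])
```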
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as for Boyer's method. An FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
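The grid-point shift via "convolution with a unit impulse" is the Fourier shift theorem; the sketch below applies it in 1-D to a toy kernel and, for brevity, performs the fractional part in the Fourier domain as well, rather than with the paper's piecewise third-order Lagrange filter:

```python
# Sketch: sub-grid shift of a sampled dose kernel via a linear phase ramp
# in the Fourier domain (toy 1-D kernel, not TG-43 data).
import numpy as np

def fourier_shift(kernel, shift):
    n = len(kernel)
    ramp = np.exp(-2j * np.pi * shift * np.fft.fftfreq(n))   # phase ramp = shift
    return np.real(np.fft.ifft(np.fft.fft(kernel) * ramp))

grid = np.arange(-32, 32, 1.0)
kernel = np.exp(-np.abs(grid) / 4.0)         # toy radially decaying dose kernel
moved = fourier_shift(kernel, 2.37)          # seed placed 2.37 cells off a grid point
print(grid[np.argmax(kernel)], grid[np.argmax(moved)])
```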
Unbiased nonorthogonal bases for tomographic reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sainz, Isabel; Klimov, Andrei B.; Roa, Luis
2010-05-15
We have developed a general method for constructing a set of nonorthogonal bases with equal separations between all different basis states in prime dimensions. We show that the corresponding biorthogonal counterparts are pairwise unbiased with respect to the components of the original bases. Using these bases, we derive an explicit expression for optimal tomography in nonorthogonal bases. A special two-dimensional case is analyzed separately.
Simulation of Earth textures by conditional image quilting
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Caers, J.; Tahmasebi, P.; Baker, A.
2014-04-01
Training image-based approaches for stochastic simulations have recently gained attention in surface and subsurface hydrology. This family of methods allows the creation of multiple realizations of a study domain, with a spatial continuity based on a training image (TI) that contains the variability, connectivity, and structural properties deemed realistic. A major drawback of these methods is their computational and/or memory cost, making certain applications challenging. It was found that similar methods, also based on training images or exemplars, have been proposed in computer graphics. One such method, image quilting (IQ), is introduced in this paper and adapted for hydrogeological applications. The main difficulty is that Image Quilting was originally not designed to produce conditional simulations and was restricted to 2-D images. In this paper, the original method developed in computer graphics has been modified to accommodate conditioning data and 3-D problems. This new conditional image quilting method (CIQ) is patch based, does not require constructing a pattern databases, and can be used with both categorical and continuous training images. The main concept is to optimally cut the patches such that they overlap with minimum discontinuity. The optimal cut is determined using a dynamic programming algorithm. Conditioning is accomplished by prior selection of patches that are compatible with the conditioning data. The performance of CIQ is tested for a variety of hydrogeological test cases. The results, when compared with previous multiple-point statistics (MPS) methods, indicate an improvement in CPU time by a factor of at least 50.
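The minimum-error boundary cut at the heart of image quilting is a small dynamic program; this sketch computes a vertical seam through the squared-difference surface of a synthetic overlap (the conditioning and 3-D extensions are omitted):

```python
# Sketch: dynamic programming finds the vertical seam with the least
# squared difference through the overlap region between two patches.
import numpy as np

def min_error_cut(err):
    # err: (rows, cols) squared-difference surface over the overlap
    E = err.copy()
    for i in range(1, err.shape[0]):
        for j in range(err.shape[1]):
            lo, hi = max(j - 1, 0), min(j + 2, err.shape[1])
            E[i, j] += E[i - 1, lo:hi].min()        # best predecessor above
    seam = [int(np.argmin(E[-1]))]
    for i in range(err.shape[0] - 2, -1, -1):       # backtrack from the bottom row
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, err.shape[1])
        seam.append(lo + int(np.argmin(E[i, lo:hi])))
    return seam[::-1]                               # cut column for each row

rng = np.random.default_rng(0)
a, b = rng.random((16, 6)), rng.random((16, 6))     # overlapping strips of two patches
print(min_error_cut((a - b) ** 2))
```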
Bovine origin Staphylococcus aureus: A new zoonotic agent?
Rao, Relangi Tulasi; Jayakumar, Kannan; Kumar, Pavitra
2017-01-01
Aim: The study aimed to assess the nature of Staphylococcus aureus strains of animal origin. The study has zoonotic importance and aimed to compare virulence between two different hosts, i.e., of bovine and ovine origin. Materials and Methods: Conventional polymerase chain reaction-based methods were used for the characterization of S. aureus strains, and a chick embryo model was employed for the assessment of the virulence capacity of the strains. All statistical tests were carried out in the R program, version 3.0.4. Results: After initial screening and molecular characterization, the prevalence of S. aureus was found to be 42.62% in bovine origin samples and 28.35% among ovine origin samples. Meanwhile, the prevalence of methicillin-resistant S. aureus was found to be meager in both hosts: among the samples, only 6.8% of isolates tested positive for methicillin resistance. Biofilm formation was quantified and the variation compared between the hosts; a Welch two-sample t-test was statistically significant (t=2.3179, df=28.103, p=0.02795). The chicken embryo model was found effective for testing the pathogenicity of the strains. Conclusion: The study supports the conclusion that healthy bovines can act as S. aureus reservoirs. Bovine origin S. aureus strains are more virulent than ovine origin strains and have a high probability of becoming zoonotic pathogens. Further gene knockout studies may be conducted to confirm the zoonotic potential of the bovine origin strains. PMID:29184376
A Method of the UMTS-FDD Network Design Based on Universal Load Characteristics
NASA Astrophysics Data System (ADS)
Gajewski, Slawomir
In this paper an original method for UMTS radio network design is presented. The method is based on a simple way of estimating the capacity-coverage trade-off for the WCDMA/FDD radio interface. This trade-off is estimated by using universal load characteristics and normalized coverage characteristics. The characteristics are useful for any propagation environment as well as for any service performance requirements. The practical applications of these characteristics to radio network planning and maintenance are described.
Beyond single-stream with the Schrödinger method
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael
2016-10-01
We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow & Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
Collins, Kodi; Warnow, Tandy
2018-06-19
PASTA is a multiple sequence alignment method that uses divide-and-conquer plus iteration to enable base alignment methods to scale with high accuracy to large sequence datasets. By default, PASTA includes MAFFT L-INS-i; our new extension of PASTA enables the use of MAFFT G-INS-i, MAFFT Homologs, CONTRAlign, and ProbCons. We analyzed the performance of each base method and of PASTA using these base methods on 224 datasets from BAliBASE 4 with at least 50 sequences. We show that PASTA enables the most accurate base methods to scale to larger datasets at reduced computational effort, and generally improves alignment and tree accuracy on the largest BAliBASE datasets. PASTA is available at https://github.com/kodicollins/pasta and has also been integrated into the original PASTA repository at https://github.com/smirarab/pasta. Supplementary data are available at Bioinformatics online.
Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions
NASA Technical Reports Server (NTRS)
Rauch, Kevin P.; Holman, Matthew
1999-01-01
We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two fixed point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method (incorporating time regularization, force-center switching, and an improved kernel function) to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE), based on the concept of interval extensions and using the original equations of motion, is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the logistic, Hénon, Lorenz and Rössler equations and the Mackey-Glass system.
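The method can be tried on the logistic map at r = 4, whose exact LLE is ln 2: iterate two mathematically equivalent expressions of the map (two "interval extensions") and fit the slope of the log lower-bound error; the cutoffs below are illustrative choices:

```python
# Sketch: iterate two equivalent forms of the logistic map in floating
# point; the log of their divergence grows with slope ~ LLE (ln 2 at r = 4).
import numpy as np

r, n = 4.0, 60
x = y = 0.3
logdiff, steps = [], []
for i in range(n):
    x = r * x * (1.0 - x)        # extension 1
    y = r * y - r * y * y        # extension 2 (same map, different rounding)
    d = abs(x - y)
    if 0 < d < 0.5:              # keep only the linear-growth regime
        steps.append(i)
        logdiff.append(np.log(d))

slope = np.polyfit(steps, logdiff, 1)[0]
print(slope, np.log(2.0))        # estimated LLE vs the exact value ln 2
```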
Parallel Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Baggag, Abdalkader; Atkins, Harold; Keyes, David
1999-01-01
This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time-dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.
Identification of species origin of meat and meat products on the DNA basis: a review.
Kumar, Arun; Kumar, Rajiv Ranjan; Sharma, Brahm Deo; Gokulakrishnan, Palanisamy; Mendiratta, Sanjod Kumar; Sharma, Deepak
2015-01-01
The adulteration/substitution of meat has always been a concern for various reasons such as public health, religious factors, wholesomeness, and unhealthy competition in meat market. Consumer should be protected from these malicious practices of meat adulterations by quick, precise, and specific identification of meat animal species. Several analytical methodologies have been employed for meat speciation based on anatomical, histological, microscopic, organoleptic, chemical, electrophoretic, chromatographic, or immunological principles. However, by virtue of their inherent limitations, most of these techniques have been replaced by the recent DNA-based molecular techniques. In the last decades, several methods based on polymerase chain reaction have been proposed as useful means for identifying the species origin in meat and meat products, due to their high specificity and sensitivity, as well as rapid processing time and low cost. This review intends to provide an updated and extensive overview on the DNA-based methods for species identification in meat and meat products.
Morphological inversion of complex diffusion
NASA Astrophysics Data System (ADS)
Nguyen, V. A. T.; Vural, D. C.
2017-09-01
Epidemics, neural cascades, power failures, and many other phenomena can be described by a diffusion process on a network. To identify the causal origins of a spread, it is often necessary to identify the triggering initial node. Here, we define a new morphological operator and use it to detect the origin of a diffusive front, given the final state of a complex network. Our method performs better than algorithms based on distance (closeness) and Jordan centrality. More importantly, our method is applicable regardless of the specifics of the forward model, and therefore can be applied to a wide range of systems such as identifying the patient zero in an epidemic, pinpointing the neuron that triggers a cascade, identifying the original malfunction that causes a catastrophic infrastructure failure, and inferring the ancestral species from which a heterogeneous population evolves.
Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida
2016-09-01
Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Michel, Claude; Andréassian, Vazken; Perrin, Charles
2005-02-01
This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
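For reference, the classical SCS-CN runoff relation under discussion, in its standard textbook form (not quoted in the abstract; depths in millimetres):

```latex
Q = \frac{(P - I_a)^2}{P - I_a + S} \quad (P > I_a), \qquad
I_a = 0.2\,S, \qquad S = \frac{25400}{\mathrm{CN}} - 254 \ \text{mm}
```

Here Q is the direct runoff depth, P the rainfall depth, S the potential maximum retention, I_a the initial abstraction, and CN the curve number; the paper's argument concerns, in particular, the conventional treatment of S as an intrinsic parameter versus I_a as an initial condition.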
Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.
Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W
1994-01-01
Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original adsorption method (method 1) for detecting anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 showed titer discrepancies of more than one dilution with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184
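The indices reported above follow the standard 2x2-table definitions; a minimal sketch (the count names are placeholders, the study's raw counts are not given in the abstract):

```python
# Standard diagnostic indices from a 2x2 decision table.
def diagnostics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "efficiency": (tp + tn) / total  # overall accuracy
    }
```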
Research on Image Encryption Based on DNA Sequence and Chaos Theory
NASA Astrophysics Data System (ADS)
Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin
2018-04-01
Nowadays encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed various encryption algorithms based on DNA sequences, providing a new idea for the design of image encryption algorithms. Therefore, a new method of image encryption based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and 1-D logistic chaotic mapping. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second module is generated by one-dimensional logistic chaotic mapping. Second, the algorithm uses DNA complementary rules to encode the original image, and uses the key and DNA computing technology to compute each pixel value of the original image, so as to realize the encryption of the whole image. Simulation results show that the algorithm has a good encryption effect and good security.
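The abstract names the ingredients but not the exact rules; a minimal sketch under assumed choices (one of the standard 2-bit DNA codings, with XOR standing in for the DNA computing step):

```python
# Minimal sketch of DNA coding plus a logistic-map keystream. The coding rule
# and the use of XOR are illustrative assumptions, not the paper's exact rules.
import numpy as np

BITS2BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def byte_to_dna(b):
    """Encode one byte as four bases, two bits per base."""
    return "".join(BITS2BASE[(b >> s) & 0b11] for s in (6, 4, 2, 0))

def logistic_keystream(n, x0=0.37, r=3.99):
    """1-D logistic map x <- r*x*(1-x), quantized to bytes."""
    out, x = np.empty(n, dtype=np.uint8), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 255)
    return out

def encrypt(img):
    key = logistic_keystream(img.size)
    return (img.ravel() ^ key).reshape(img.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
enc = encrypt(img)
assert np.array_equal(encrypt(enc), img)   # the XOR keystream is its own inverse
print(byte_to_dna(int(enc[0, 0])))         # e.g. 'GCTA'
```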
A Comparison of Two Approaches to Beta-Flexible Clustering.
ERIC Educational Resources Information Center
Belbin, Lee; And Others
1992-01-01
A method for hierarchical agglomerative polythetic (multivariate) clustering, based on the unweighted pair-group method with arithmetic averages (UPGMA), is compared with the original beta-flexible technique, a weighted average method. Reasons the flexible UPGMA strategy is recommended are discussed, focusing on the ability to recover cluster structure over…
Storey, Rebecca
2007-01-01
Comparison of different adult age estimation methods on the same skeletal sample with unknown ages could forward paleodemographic inference, while researchers sort out various controversies. The original aging method for the auricular surface (Lovejoy et al., 1985a) assigned an age estimation based on several separate characteristics. Researchers have found this original method hard to apply. It is usually forgotten that before assigning an age, there was a seriation, an ordering of all available individuals from youngest to oldest; thus, age estimation reflected the place of an individual within its sample. A recent article (Buckberry and Chamberlain, 2002) proposed a revised method that scores these various characteristics into age stages, which can then be used with a Bayesian method to estimate an adult age distribution for the sample. Both methods were applied to the adult auricular surfaces of a Pre-Columbian Maya skeletal population from Copan, Honduras, and resulted in age distributions with significant numbers of older adults. However, contrary to the usual paleodemographic distribution, one Bayesian estimation based on uniform prior probabilities yielded a population with 57% of the ages at death over 65, while another based on a high-mortality life table still had 12% of the individuals aged over 75 years. The seriation method yielded an age distribution more similar to that known from preindustrial historical situations, without excessive longevity of adults. Paleodemography must still wrestle with its elusive goal of accurate adult age estimation from skeletons, a necessary base for demographic study of past populations.
NASA Astrophysics Data System (ADS)
Chuang, Cheng-Hung; Chen, Yen-Lin
2013-02-01
This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images for possible application in optical transmission systems. Steganographic optical image encryption based on the DRPE technique has been investigated as a way to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data hiding methods used in DRPE systems can distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process; thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.
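The reversibility of the scheme rests on the bit-plane split being exactly invertible; a minimal numpy sketch of just that step (the JBIG2 compression and RSA stages are omitted):

```python
# Bit-plane decomposition of an 8-bit image and its lossless reassembly,
# the property the embedding capacity of the scheme relies on.
import numpy as np

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = [(img >> k) & 1 for k in range(8)]          # LSB plane first
rebuilt = sum(p.astype(np.uint8) << k for k, p in enumerate(planes))
assert np.array_equal(rebuilt, img)                  # perfectly reversible
```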
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality. PMID:28663860
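As a reference point for the coefficient shrinkage step, a minimal wavelet soft-thresholding pass (the universal threshold below is a common default, not necessarily the paper's choice):

```python
# Wavelet soft-thresholding ("shrinkage") for residual speckle suppression.
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate from finest HH band
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold (assumption)
    out = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)
```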
Downscaling Thermal Infrared Radiance for Subpixel Land Surface Temperature Retrieval
Liu, Desheng; Pu, Ruiliang
2008-01-01
Land surface temperature (LST) retrieved from satellite thermal sensors often consists of mixed temperature components. Retrieving subpixel LST is therefore needed in various environmental and ecological studies. In this paper, we developed two methods for downscaling coarse resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval. The first method was developed on the basis of a scale-invariant physical model on TIR radiance. The second method was based on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution. The two methods were applied to downscale simulated 990-m ASTER TIR data to 90-m resolution. When validated against the original 90-m ASTER TIR data, the results revealed that both downscaling methods were successful in capturing the general patterns of the original data and resolving considerable spatial details. Further quantitative assessments indicated a strong agreement between the true values and the estimated values by both methods. PMID:27879844
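A minimal sketch of the second (statistical) method's core idea, assuming a simple linear relation between coarse radiance and land-cover fractions; the paper's exact regression form is not reproduced here:

```python
# Fit per-class radiance contributions at coarse scale, then apply the
# fitted coefficients to the fine-resolution cover-fraction maps.
import numpy as np

def fit_radiance_model(F_coarse, L_coarse):
    """F_coarse: (n_pixels, n_classes) cover fractions; L_coarse: (n_pixels,) radiance."""
    beta, *_ = np.linalg.lstsq(F_coarse, L_coarse, rcond=None)
    return beta                        # per-class radiance contributions

def downscale(F_fine, beta):
    return F_fine @ beta               # predicted fine-scale radiance

F_coarse = np.random.dirichlet(np.ones(3), size=500)    # toy cover fractions
L_coarse = F_coarse @ np.array([290.0, 300.0, 310.0])   # toy radiances
beta = fit_radiance_model(F_coarse, L_coarse)
```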
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
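A commonly used approximated Heaviside function in this line of work is the arctan smoothing, given here as a representative form rather than necessarily the paper's exact choice:

```latex
\psi_\xi(x) = \frac{1}{2} + \frac{1}{\pi}\arctan\!\left(\frac{x}{\xi}\right),
\qquad \psi_\xi(x) \longrightarrow H(x) \ \text{ as } \xi \to 0^{+}.
```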
An Evolutionary Framework for Understanding the Origin of Eukaryotes
Blackstone, Neil W.
2016-01-01
Two major obstacles hinder the application of evolutionary theory to the origin of eukaryotes. The first is more apparent than real—the endosymbiosis that led to the mitochondrion is often described as “non-Darwinian” because it deviates from the incremental evolution championed by the modern synthesis. Nevertheless, endosymbiosis can be accommodated by a multi-level generalization of evolutionary theory, which Darwin himself pioneered. The second obstacle is more serious—all of the major features of eukaryotes were likely present in the last eukaryotic common ancestor thus rendering comparative methods ineffective. In addition to a multi-level theory, the development of rigorous, sequence-based phylogenetic and comparative methods represents the greatest achievement of modern evolutionary theory. Nevertheless, the rapid evolution of major features in the eukaryotic stem group requires the consideration of an alternative framework. Such a framework, based on the contingent nature of these evolutionary events, is developed and illustrated with three examples: the putative intron proliferation leading to the nucleus and the cell cycle; conflict and cooperation in the origin of eukaryotic bioenergetics; and the inter-relationship between aerobic metabolism, sterol synthesis, membranes, and sex. The modern synthesis thus provides sufficient scope to develop an evolutionary framework to understand the origin of eukaryotes. PMID:27128953
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, then to determine linear controllers for each linearized model, and finally implement gain-scheduling technique. The validation of such a controller is often based on linear system analysis for the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear setting. The method is based on sum-of-squares (SOS) optimization that can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
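The underlying certificate is the standard Lyapunov characterization of an invariant region-of-attraction estimate; SOS programming replaces each polynomial inequality below with a semidefinite-representable sum-of-squares constraint (via S-procedure multipliers):

```latex
V(0) = 0, \quad V(x) > 0 \ (x \neq 0), \quad
\nabla V(x)^{\mathsf T} f(x) < 0 \ \ \forall x \in \Omega_\gamma \setminus \{0\}
\;\Longrightarrow\; \Omega_\gamma = \{x : V(x) \le \gamma\} \subseteq \mathrm{ROA}.
```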
Diway, Bibian; Khoo, Eyen
2017-01-01
The development of timber tracking methods based on genetic markers can provide scientific evidence to verify the origin of timber products and fulfill the growing requirement for sustainable forestry practices. In this study, the origin of an important Dark Red Meranti wood, Shorea platyclados, was studied by using a combination of seven chloroplast DNA markers and 15 short tandem repeats (STRs). A total of 27 natural populations of S. platyclados were sampled throughout Malaysia to establish population-level and individual-level identification databases. A haplotype map was generated from chloroplast DNA sequencing for population identification, resulting in 29 multilocus haplotypes based on 39 informative intraspecific variable sites. Subsequently, a DNA profiling database was developed from the 15 STRs, allowing for individual identification in Malaysia. Cluster analysis divided the 27 populations into two genetic clusters, corresponding to the regions of Eastern and Western Malaysia. The conservativeness tests showed that the Malaysia database is conservative after removal of bias from population subdivision and sampling effects. Independent self-assignment tests correctly assigned individuals to the database in an overall 60.60−94.95% of cases for identified populations, and in 98.99−99.23% of cases for identified regions. Both the chloroplast DNA database and the STRs appear to be useful for tracking timber originating in Malaysia. Hence, this DNA-based method could serve as an effective additional tool in the existing forensic timber identification system for ensuring the sustainable management of this species into the future. PMID:28430826
Complex Archaeological Prospection Using Combination of Non-destructive Techniques
NASA Astrophysics Data System (ADS)
Faltýnová, M.; Pavelka, K.; Nový, P.; Šedina, J.
2015-08-01
This article describes the use of a combination of non-destructive techniques for the complex documentation of a legendary historical site called Devil's Furrow, an unusual linear formation lying in the landscape of central Bohemia. In spite of many efforts towards interpretation of the formation, its original form and purpose have not yet been explained in a satisfactory manner. The study focuses on the northern part of the furrow, which appears to be a dissimilar element within the scope of the whole Devil's Furrow. This article presents a detailed description of relics of the formation based on historical map searches and modern investigation methods, including airborne laser scanning, aerial photogrammetry (based on airplane and RPAS) and ground-penetrating radar. Airborne laser scanning data and aerial orthoimages acquired by the Czech Office for Surveying, Mapping and Cadastre were used; other measurements were conducted by our laboratory. Data acquired by the various methods provide sufficient information to determine the probable original shape of the formation and explicitly demonstrate the anthropogenic origin of the northern part of the formation (around the village of Lipany).
Elhaik, Eran; Tatarinova, Tatiana; Chebotarev, Dmitri; Piras, Ignazio S.; Maria Calò, Carla; De Montis, Antonella; Atzori, Manuela; Marini, Monica; Tofanelli, Sergio; Francalacci, Paolo; Pagani, Luca; Tyler-Smith, Chris; Xue, Yali; Cucca, Francesco; Schurr, Theodore G.; Gaieski, Jill B.; Melendez, Carlalynne; Vilar, Miguel G.; Owings, Amanda C.; Gómez, Rocío; Fujita, Ricardo; Santos, Fabrício R.; Comas, David; Balanovsky, Oleg; Balanovska, Elena; Zalloua, Pierre; Soodyall, Himla; Pitchappan, Ramasamy; GaneshPrasad, ArunKumar; Hammer, Michael; Matisoo-Smith, Lisa; Wells, R. Spencer; Acosta, Oscar; Adhikarla, Syama; Adler, Christina J.; Bertranpetit, Jaume; Clarke, Andrew C.; Cooper, Alan; Der Sarkissian, Clio S. I.; Haak, Wolfgang; Haber, Marc; Jin, Li; Kaplan, Matthew E.; Li, Hui; Li, Shilin; Martínez-Cruz, Begoña; Merchant, Nirav C.; Mitchell, John R.; Parida, Laxmi; Platt, Daniel E.; Quintana-Murci, Lluis; Renfrew, Colin; Lacerda, Daniela R.; Royyuru, Ajay K.; Sandoval, Jose Raul; Santhakumari, Arun Varatharajan; Soria Hernanz, David F.; Swamikrishnan, Pandikumar; Ziegle, Janet S.
2014-01-01
The search for a method that utilizes biological information to predict humans' place of origin has occupied scientists for millennia. Over the past four decades, scientists have employed genetic data in an effort to achieve this goal, but with limited success. While biogeographical algorithms using next-generation sequencing data have achieved an accuracy of 700 km in Europe, they were inaccurate elsewhere. Here we describe the Geographic Population Structure (GPS) algorithm and demonstrate its accuracy with three data sets using 40,000–130,000 SNPs. GPS placed 83% of worldwide individuals in their country of origin. Applied to over 200 Sardinian villagers, GPS placed a quarter of them in their villages and most of the rest within 50 km of their villages. GPS's accuracy and power to infer the biogeography of worldwide individuals down to their country or, in some cases, village of origin underscores the promise of admixture-based methods for biogeography and has ramifications for genetic ancestry testing. PMID:24781250
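The abstract does not spell out the algorithm; purely as an illustration of the admixture-based localization idea, one could predict coordinates as a similarity-weighted average of reference-population locations (the function, the weighting scheme, and k are hypothetical, not the GPS procedure):

```python
# Illustrative admixture-based localizer, not the published GPS algorithm.
import numpy as np

def locate(sample_admix, ref_admix, ref_latlon, k=10):
    d = np.linalg.norm(ref_admix - sample_admix, axis=1)   # distance in admixture space
    idx = np.argsort(d)[:k]                                # k closest reference populations
    w = 1.0 / (d[idx] + 1e-9)                              # inverse-distance weights
    return (w[:, None] * ref_latlon[idx]).sum(0) / w.sum() # weighted (lat, lon)
```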
Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua
2018-03-23
To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by a low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging.
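A generic POCS-style iteration of the kind referenced above, alternating a navigator-derived phase constraint in image space with data consistency in k-space (a sketch, not the IRIS/POCSIRIS implementation; kdata is the zero-filled acquired k-space, mask flags acquired samples, nav_phase comes from the navigator):

```python
# Generic POCS sketch for phase-constrained reconstruction of one shot.
import numpy as np

def pocs_recon(kdata, mask, nav_phase, n_iter=30):
    img = np.fft.ifft2(kdata)
    for _ in range(n_iter):
        img = np.abs(img) * np.exp(1j * nav_phase)   # impose navigator phase
        k = np.fft.fft2(img)
        k[mask] = kdata[mask]                        # keep the measured samples
        img = np.fft.ifft2(k)
    return np.abs(img)
```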
Teaching Evidence-based Medicine Using Literature for Problem Solving.
ERIC Educational Resources Information Center
Mottonen, Merja; Tapanainen, Paivi; Nuutinen, Matti; Rantala, Heikki; Vainionpaa, Leena; Uhari, Matti
2001-01-01
Evidence-based medicine--the process of using research findings systematically as the basis for clinical decisions--can be taught using problem-solving teaching methods. Evaluates whether it was possible to motivate students to use the original literature by giving them selected patient problems to solve. (Author/ASK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Li, Y; Tian, Z
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme on four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR doses were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both the MC-based beamlet dose calculation and the treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
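A toy rendering of the iterate-and-re-optimize loop, with random matrices standing in for the MC dose engine and a least-squares fit standing in for the plan optimizer; everything here is a placeholder:

```python
# Toy loop: MC particle budget follows the current fluence, beamlet doses
# are re-estimated with fluence-dependent noise, then fluences re-optimized.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_blt = 100, 20
true_dose = rng.random((n_vox, n_blt))             # stand-in for exact beamlet doses
target = np.ones(n_vox)                            # toy prescription

fluence = np.ones(n_blt)
for _ in range(5):
    particles = 1e5 * fluence / fluence.sum()      # MC budget proportional to fluence
    mc_dose = true_dose + rng.normal(0.0, 1.0 / np.sqrt(particles), true_dose.shape)
    fluence, *_ = np.linalg.lstsq(mc_dose, target, rcond=None)
    fluence = np.clip(fluence, 1e-6, None)         # fluences must stay positive
```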
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
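A sketch of the brute-force matching stage, assuming a user-supplied `project` callable that maps LiDAR skyline points to pixel coordinates for a trial attitude correction (the helper and the scoring are hypothetical; skyline pixels are assumed sorted by column):

```python
# Brute-force grid search over small roll/pitch/yaw corrections, scoring the
# mean vertical gap between projected skyline points and image skyline pixels.
import itertools
import numpy as np

def best_attitude(project, skyline_pts, skyline_px, step=0.1, span=1.0):
    grid = np.arange(-span, span + 1e-9, step)
    best, best_err = None, np.inf
    for dr, dp, dy in itertools.product(grid, grid, grid):
        uv = project(skyline_pts, dr, dp, dy)    # 3-D points -> (u, v) pixel coords
        err = np.abs(uv[:, 1] - np.interp(uv[:, 0],
                                          skyline_px[:, 0], skyline_px[:, 1])).mean()
        if err < best_err:
            best, best_err = (dr, dp, dy), err
    return best
```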
Nesteruk, L V; Makarova, N N; Svishcheva, G R; Stolpovsky, Yu A
2015-07-01
Estimation of the state of genetic diversity and the originality of the breed structure is required for the conservation and management of domestic breeds of agricultural animals. The Romanov breed of sheep from the leading breeding and gene pool farms in Yaroslavl oblast (Russia) is the object of our study. ISSR fingerprinting was used as a molecular method to study the sheep gene pools. Forty-three DNA fragments were detected with two primers, (AG)9C and (GA)9C (25 and 18 fragments, respectively). Of the discovered ISSR markers, 81% were polymorphic. The coefficient of genetic originality was used for the first time to study the specificity and originality of the Romanov-breed gene pool. Based on its values, the studied individuals were divided into five classes depending on the frequency of the ISSR fragment. The most original (or rarest), as well as the most typical, genotypes were singled out in the Romanov sheep gene pool. Use of the obtained data on genetic originality was proposed as a means to increase the efficiency of selection and breeding of autochthonous breeds of domesticated animal species.
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
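The two building blocks in numpy form: standard DRPE followed by Poisson-limited photon counting of the encrypted amplitude (a sketch; the Bayer down-sampling and per-channel handling of the paper are omitted):

```python
# DRPE: random phase mask in the input plane, a second mask in the Fourier
# plane. Photon counting keeps only sparse Poisson-limited counts.
import numpy as np

rng = np.random.default_rng(0)

def drpe(f):
    n1 = np.exp(2j * np.pi * rng.random(f.shape))   # input-plane phase mask
    n2 = np.exp(2j * np.pi * rng.random(f.shape))   # Fourier-plane phase mask
    return np.fft.ifft2(np.fft.fft2(f * n1) * n2)

def photon_count(psi, n_photons=1e4):
    p = np.abs(psi) ** 2
    lam = n_photons * p / p.sum()                   # expected counts per pixel
    return rng.poisson(lam)                         # sparse photon-limited image

f = rng.random((64, 64))
counts = photon_count(drpe(f))
```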
Lacombe, Pierre J.
2011-01-01
Pump and Treat (P&T) remediation is the primary technique used to contain and remove trichloroethylene (TCE) and its degradation products cis-1,2-dichloroethylene (cDCE) and vinyl chloride (VC) from groundwater at the Naval Air Warfare Center (NAWC), West Trenton, NJ. Three methods were used to determine the masses of TCE, cDCE, and VC removed from groundwater by the P&T system since it became fully operational in 1996. Method 1 is based on the flow volume and concentrations of TCE, cDCE, and VC in groundwater that entered the P&T building as influent. Method 2 is based on the withdrawal volume from each active recovery well and the concentrations of TCE, cDCE, and VC in the water samples from each well. Method 3 compares the maximum monthly amounts of TCE, cDCE, and VC from Method 1 and Method 2; the greater of the two values is selected to represent the masses of TCE, cDCE, and VC removed from groundwater each month. Previously published P&T monthly reports used Method 1 to determine the mass of TCE, cDCE, and VC removed. The reports state that 8,666 pounds (lbs) of TCE, 13,689 lbs of cDCE, and 2,455 lbs of VC were removed by the P&T system during 1996-2010. By using Method 2, the mass removed was determined to be 8,985 lbs of TCE, 17,801 lbs of cDCE, and 3,056 lbs of VC, and Method 3 resulted in 10,602 lbs of TCE, 21,029 lbs of cDCE, and 3,496 lbs of VC removed. To determine the mass of original TCE removed from groundwater, the individual masses of TCE, cDCE, and VC (determined using Methods 1, 2, and 3) were converted to numbers of moles, summed, and converted to pounds of original TCE. By using the molar conversion, the mass of original TCE removed from groundwater by Methods 1, 2, and 3 was 32,381 lbs, 39,535 lbs, and 46,452 lbs, respectively, during 1996-2010. P&T monthly reports state that 24,805 lbs of summed TCE, cDCE, and VC were removed from groundwater. The simple summing method underestimates the mass of original TCE removed by the P&T system.
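The molar conversion can be checked directly; a worked sketch with the Method 1 totals (molar masses: TCE 131.39, cDCE 96.94, VC 62.50 g/mol; the same ratios hold in pounds):

```python
# Convert removed masses to moles, sum, and express as equivalent original TCE.
M_TCE, M_cDCE, M_VC = 131.39, 96.94, 62.50       # g/mol
lbs = {"TCE": 8666, "cDCE": 13689, "VC": 2455}   # Method 1 totals, 1996-2010
lb_moles = lbs["TCE"]/M_TCE + lbs["cDCE"]/M_cDCE + lbs["VC"]/M_VC
print(round(lb_moles * M_TCE))                   # 32381 lbs of original TCE
```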
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp discrete multiple-hole image from data interfered by multiple holes. Also, the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which much finer images than the original have been reconstructed.
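A standard way to realize such a PSF-based inverse step is regularized frequency-domain deconvolution; a sketch (the small `eps` stabilizer is an assumption, the paper does not specify its regularization):

```python
# Regularized (Wiener-style) deconvolution with a measured PSF.
import numpy as np

def wiener_deconvolve(blurred, psf, eps=1e-2):
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF zero-padded to image size
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + eps)    # stabilized inverse filter
    return np.real(np.fft.ifft2(F))
```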
Challenges of Sanitary Compliance Related to Trade in Products of Animal Origin in Southern Africa.
Magwedere, Kudakwashe; Songabe, Tembile; Dziva, Francis
2015-06-30
Irrespective of the existence of potentially pathogenic organisms carried by animals, foods of animal origin remain the prime nutrition of humans world-wide. As such, food safety continues to be a global concern primarily to safeguard public health and to promote international trade. Application of integrated risk-based quality assurance procedures on-farm and at slaughterhouses plays a crucial role in controlling hazards associated with foods of animal origin. In the present paper we examine safety assurance systems and associated value chains for foods of animal origin based on historical audit results of some Southern African countries with thriving export trade in animal products, mainly to identify areas for improvement. Among the key deficiencies identified were: i) failure to keep pace with scientific advances related to the ever-changing food supply chain; ii) lack of effective national and regional intervention strategies to curtail pathogen transmission and evolution, notably the zoonotic Shiga toxin-producing Escherichia coli ; and iii) a lack of effective methods to reduce contamination of foods of wildlife origin. The introduction of foods of wildlife origin for domestic consumption and export markets seriously compounds already existing conflicts in legislation governing food supply and safety. This analysis identifies gaps required to improve the safety of foods of wildlife origin.
Sorted Index Numbers for Privacy Preserving Face Recognition
NASA Astrophysics Data System (ADS)
Wang, Yongjin; Hatzinakos, Dimitrios
2009-12-01
This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
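The template construction described above is compact enough to sketch directly: random-project the feature vector with a user-specific key, then keep only the ordering of the projected coefficients (the matching score below is an illustrative choice):

```python
# Sorted index numbers: the template stores ranks, not values, so the
# original features cannot be recovered; a new key gives a new template.
import numpy as np

def sin_template(feature, key_seed, out_dim=64):
    rng = np.random.default_rng(key_seed)       # key-dependent projection = changeability
    R = rng.normal(size=(out_dim, feature.size))
    return np.argsort(R @ feature)              # only the sorted index numbers are kept

def match_score(t1, t2):
    return (t1 == t2).mean()                    # simple rank-agreement similarity
```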
Assessment of the floral origin of honey by SDS-PAGE immunoblot techniques.
Baroni, María V; Chiabrando, Gustavo A; Costa, Cristina; Wunderlin, Daniel A
2002-03-13
We report on the development of a novel alternative method for the assessment of floral origin in honey samples based on the study of honey proteins using immunoblot assays. The main goal of our work was to evaluate the use of honey proteins as chemical markers of the floral origin of honey. Considering that honeybee proteins should be common to all types of honey, we decided to verify the usefulness of pollen proteins as floral origin markers in honey. We used polyclonal anti-pollen antibodies raised in rabbits by repeated immunization with Sunflower (Helianthus annuus) and Eucalyptus (Eucalyptus sp.) pollen extracts. The IgG fraction was purified by immunoaffinity. These antibodies were verified with nitrocellulose-blotted pollen and unifloral honey protein extracts. The antibodies against Sunflower pollen bound to the 36 and 33 kDa proteins of Sunflower unifloral honey and of honey containing Sunflower pollen, and the antibodies against Eucalyptus sp. pollen bound to the 38 kDa proteins of Eucalyptus sp. unifloral honey in immunoblot assays. Satisfactory results were obtained in differentiating between the types of pollen analyzed and between Sunflower honey and Eucalyptus honey, with little cross-reactivity with honeys of other origins and with good detection sensitivity. This immunoblot method opens an interesting field for the development of new antibodies from different plants, which could serve as an alternative or complementary method to the usual melissopalynological analysis for assessing honey floral origin.
Method for rapid base sequencing in DNA and RNA
Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.
1987-10-07
A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.
Boric Acid in Kjeldahl Analysis
ERIC Educational Resources Information Center
Cruz, Gregorio
2013-01-01
The use of boric acid in the Kjeldahl determination of nitrogen is a variant of the original method widely applied in many laboratories all over the world. Its use is recommended by control organizations such as ISO, IDF, and EPA because it yields reliable and accurate results. However, the chemical principles the method is based on are not…
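For context, in the boric-acid variant the distilled ammonia is trapped as ammonium borate (NH3 + H3BO3 yields NH4+ + H2BO3-) and titrated directly with standardized acid, so that

```latex
\%\mathrm{N} \;=\; \frac{V_{\mathrm{acid}}\, c_{\mathrm{acid}} \times 14.007\ \mathrm{g\,mol^{-1}}}{m_{\mathrm{sample}}} \times 100,
```

with the titrant volume in litres, its concentration in mol/L, and the sample mass in grams.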
Multigrid optimal mass transport for image registration and morphing
NASA Astrophysics Data System (ADS)
Rehman, Tauseef ur; Tannenbaum, Allen
2007-02-01
In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion, and no landmarks need to be specified for correspondence. In our work, we demonstrate a significant improvement in computation time when our algorithm is applied, as compared to the method originally proposed by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.
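In the L2 Kantorovich setting used by this line of work, the problem is to find, among mass-preserving (MP) maps, the one of minimal transport cost (written here in the form we believe Haker et al. use):

```latex
\min_{u\,\in\,MP} \; M(u) = \int_{\Omega} \lVert u(x) - x \rVert^{2} \, \mu_{0}(x)\, dx,
\qquad \text{s.t.} \quad \mu_{0} = \det\!\big(Du\big)\, \mu_{1}\!\circ u .
```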
Use of Vertically Integrated Ice in WRF-Based Forecasts of Lightning Threat
NASA Technical Reports Server (NTRS)
McCaul, E. W., jr.; Goodman, S. J.
2008-01-01
Previously reported methods of forecasting lightning threat using fields of graupel flux from WRF simulations are extended to include the simulated field of vertically integrated ice within storms. Although the ice integral shows less temporal variability than graupel flux, it provides more areal coverage, and can thus be used to create a lightning forecast that better matches the areal coverage of the lightning threat found in observations of flash extent density. A blended lightning forecast threat can be constructed that retains much of the desirable temporal sensitivity of the graupel flux method, while also incorporating the coverage benefits of the ice integral method. The graupel flux and ice integral fields contributing to the blended forecast are calibrated against observed lightning flash origin density data, based on Lightning Mapping Array observations from a series of case studies chosen to cover a wide range of flash rate conditions. Linear curve fits that pass through the origin are found to be statistically robust for the calibration procedures.
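The calibration step lends itself to a short sketch: fit each proxy field to observed flash-origin density with a line through the origin, then blend. The equal blend weights below are placeholders, not the paper's calibrated values:

```python
# Through-origin least-squares calibration of two proxy fields, then a blend.
import numpy as np

def slope_through_origin(x, y):
    return (x * y).sum() / (x * x).sum()      # zero-intercept least squares

def blended_threat(graupel_flux, ice_integral, obs):
    f1 = slope_through_origin(graupel_flux, obs) * graupel_flux
    f2 = slope_through_origin(ice_integral, obs) * ice_integral
    return 0.5 * f1 + 0.5 * f2                # equal weights: placeholder choice
```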
All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis.
Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L; Terés, Lluís; Baumann, Reinhard R
2016-09-21
We report on the detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates under ambient conditions, without the need for a cleanroom environment or inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices); in general, the manufacturing yield is much lower than that of the established lithography-based methods. Thus, the focus of this contribution is a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing, finally resulting in a yield improvement.
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2017-07-01
Though Ogden et al. list several shortcomings of the original SCS-CN method, fit for purpose is a key consideration in hydrological modelling, as shown by the adoption of SCS-CN method in many design standards. The theoretical framework of Bartlett et al. [2016a] reveals a family of semidistributed models, of which the SCS-CN method is just one member. Other members include event-based versions of the Variable Infiltration Capacity (VIC) model and TOPMODEL. This general model allows us to move beyond the limitations of the original SCS-CN method under different rainfall-runoff mechanisms and distributions for soil and rainfall variability. Future research should link this general model approach to different hydrogeographic settings, in line with the call for action proposed by Ogden et al.
[Clinical examples of professor LI Zhi-dao's "tonifying three qi" acupuncture method].
Li, Rui-Chao; Li, Yan; Fu, Yuan-Xin; Zhao, Xiang-Fei; Sun, Jing; Li, Lan-Yuan
2014-08-01
Professor LI Zhi-dao, according to acupoint selection by syndrome differentiation in TCM basic theory, developed a new therapy, namely "tonifying three qi", that is mainly based on three acupoints on the Conception Vessel. This method consists of Danzhong (CV 17), Zhongwan (CV 12) and Qihai (CV 6) on the Conception Vessel, which successively nourish clear qi, stomach qi and original qi. In clinic, according to the severity of the symptoms of the three qi, the acupoints are selected flexibly, which can respectively treat deficiency of heart-lung qi, deficiency of stomach-spleen qi and deficiency of original qi. Some examples are also given in the article.
NASA Astrophysics Data System (ADS)
Sumihara, K.
Based upon legitimate variational principles, a microscopic-macroscopic finite element formulation for linear dynamics is presented using the hybrid stress finite element method. The microscopic application of geometric perturbation, introduced by Pian, and the introduction of an infinitesimal limit core element (baby element) have been consistently combined according to the flexible and inherent interpretation of the legitimate variational principles originally developed by Pian and Tong. The conceptual development based upon the hybrid finite element method is extended to linear dynamics with the introduction of physically meaningful higher modes.
A New DEM Generalization Method Based on Watershed and Tree Structure
Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang
2016-01-01
DEM generalization is the basis of multi-dimensional observation and of expressing and analyzing the terrain; it is also the core of building a multi-scale geographic database. Thus, many researchers have studied both the theory and the methods of DEM generalization. This paper proposes a new method of generalizing terrain, which extracts feature points based on a tree model constructed to capture the nested relationship of watershed characteristics. The paper used the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data and extracted the feature points in every single watershed to reconstruct the DEM. The paper achieved generalization from 1:10000 DEM to 1:50000 DEM by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some other classic methods, such as aggregation, resampling, and VIP, based on the original 1:50000 DEM. The outcome shows that the method performed well. The method can choose the best threshold according to the target generalization scale to decide the density of the feature points in the watershed. Meanwhile, this method can preserve the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlapped contour contrast, elevation statistical parameters, and slope and aspect analysis, we found that the W8D algorithm performed well and effectively in terrain representation. PMID:27517296
[Spectral scatter correction of coal samples based on quasi-linear local weighted method].
Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng
2014-07-01
The present paper puts forth a new spectral correction method based on quasi-linear expressions and local weighted functions. The first stage of the method is to search three quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic, and growth-curve expressions. Then the local weighted function is constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight, and Triweight kernels. After adding the function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established based on the PLS and PCA-BP neural network methods, respectively, which can be used for estimating the accuracy of the corrected spectra. At last, the optimal correction mode was determined from the analytical results of the different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1, and 3 mm. The results show that the proposed method can eliminate the scattering influence and can also enhance the information of spectral peaks. This paper provides a more efficient way to enhance the correlation between corrected spectra and coal qualities significantly, and to improve the accuracy and stability of the analytical model substantially.
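For orientation, classic MSC regresses each spectrum on the mean spectrum and removes the fitted scatter terms; a sketch of the quadratic quasi-linear variant (the local weighting kernels of the paper are omitted for brevity):

```python
# Quadratic "quasi-linear" MSC: fit each spectrum to [1, ref, ref^2] and
# strip the additive and quadratic scatter terms, keeping the chemical signal.
import numpy as np

def msc_quadratic(X):
    ref = X.mean(axis=0)                           # mean spectrum as reference
    A = np.column_stack([np.ones_like(ref), ref, ref ** 2])
    out = np.empty_like(X)
    for i, x in enumerate(X):
        a, b, c = np.linalg.lstsq(A, x, rcond=None)[0]
        out[i] = (x - a - c * ref ** 2) / b        # remove fitted scatter terms
    return out
```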
Campos, G W
1998-01-01
This paper describes a new health care management method. A triangular confrontation system was constructed, based on a theoretical review, empirical facts observed from health services, and the researcher's knowledge, jointly analyzed. This new management model was termed 'health-team-focused collegiate management', entailing several original organizational concepts: production unity, matrix-based reference team, collegiate management system, cogovernance, and product/production interface.
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
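The second (circulant) lifting stage is mechanical enough to sketch: every 1 in the intermediate protograph becomes a Z x Z cyclically shifted identity block, every 0 a zero block. The shift values below are random placeholders, not an optimized design:

```python
# Circulant lifting of a small protomatrix into a full parity-check matrix.
import numpy as np

def circulant_lift(proto, Z, rng=np.random.default_rng(0)):
    m, n = proto.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                shift = int(rng.integers(Z))                 # placeholder shift choice
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
    return H

H = circulant_lift(np.array([[1, 1, 0], [0, 1, 1]]), Z=4)    # (8, 12) parity-check matrix
```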
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Chen, S; Zhang, B
Purpose: This study compares the geometric-based setup (GBS), which is currently used in the clinic, to a novel concept of dose-based setup (DBS) of head and neck (H&N) patients using the cone beam CT (CBCT) of the day, and evaluates the clinical advantages. Methods: Ten H&N patients who underwent re-simulation and re-planning due to noticeable anatomic changes during the course of their treatments were retrospectively reviewed for dosimetric changes under the assumption that no plan modification was performed. The RayStation planning system (RaySearch Laboratories AB, Sweden) was used to match (ROI fusion module) the prescribed isodose line (IDL) in the CBCT, imported along with ROIs from the re-planned CT, to the IDL of the original plan (dose-based setup: DBS). Then the CBCT plan based on daily setup using the GBS (previously used for the patient) and the DBS CBCT plan recalculated in RayStation were compared against the original CT-sim plan. Results: For most patients, tumor coverage and OAR doses generally worsened when the CBCT plans were compared with the original CT-sim plan under GBS. However, when DBS intervened, the OAR doses and tumor coverage were better than with the GBS. For example, one patient's daily average doses to the right parotid and oral cavity increased by 26% and 36%, respectively, from the original plan to the GBS planning, but increased by only 13% and 24%, respectively, with DBS. GTV D95 coverage also decreased by 16% with GBS, but only 2% with DBS. Conclusion: The DBS method is superior to GBS in preventing abrupt dose changes to OARs as well as to the PTV/CTV or GTV, at least for some H&N cases. Since it is not known in advance when DBS is beneficial over GBS, a system that enables on-line DBS may be helpful for better treatment of H&N patients.
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
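Weakly-compressible formulations of this kind typically close the system with a stiff barotropic equation of state; a common choice (not necessarily this solver's) is the Tait law

```latex
p = \frac{\rho_0 c_0^{2}}{\gamma}\left[\left(\frac{\rho}{\rho_0}\right)^{\gamma} - 1\right],
\qquad \gamma \approx 7 \ \text{for water},
```

with the artificial sound speed c0 chosen roughly an order of magnitude above the maximum flow speed so that density variations remain near 1%.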
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for two-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from the original subspaces, and the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We performed experiments on Dataset 2a of BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with four classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
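As a reference point, standard two-class CSP reduces to a generalized eigenproblem on the class covariance matrices (a sketch of the baseline; the ACPC extension itself is not reproduced here):

```python
# Two-class CSP: spatial filters are generalized eigenvectors of the
# normalized class covariances; take the extreme-eigenvalue filters.
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_filters=6):
    """X1, X2: (trials, channels, samples) band-passed EEG, one array per class."""
    C1 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X1], axis=0)
    C2 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X2], axis=0)
    w, V = eigh(C1, C1 + C2)                       # generalized eigenproblem
    order = np.argsort(w)
    pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return V[:, pick].T                            # rows = spatial filters
```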
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance-based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones which demonstrate the efficacy of our method.
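A toy grayscale rendering of the DBS loop, suitable only for small images: toggle each pixel and keep any change that lowers the HVS-filtered squared error. The Gaussian HVS model and grayscale restriction are simplifying assumptions; the paper's color metric and dot-overlap model are omitted:

```python
# Toy direct binary search on one grayscale channel.
import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_halftone(img, sigma=1.5, sweeps=3):
    h = (img > 0.5).astype(float)              # initial halftone guess
    cost = (gaussian_filter(h - img, sigma) ** 2).sum()
    for _ in range(sweeps):
        for idx in np.ndindex(img.shape):
            h[idx] = 1.0 - h[idx]              # trial toggle
            trial = (gaussian_filter(h - img, sigma) ** 2).sum()
            if trial < cost:
                cost = trial                   # accept the toggle
            else:
                h[idx] = 1.0 - h[idx]          # revert
    return h
```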
Projector primary-based optimization for superimposed projection mappings
NASA Astrophysics Data System (ADS)
Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.
2018-01-01
Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. According to our observations, this is because overlapping projections are commonly treated as an additive-linear mixture of color, which our elaborated observations show is not the case. We propose a method that enables us to use high-quality appearance data measured from original objects and to regenerate the same appearance by projecting optimized images using multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface, followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights to facilitate blending, and the system is solved subject to constraints. These samples are used to populate a light-transport-based system, which is then solved by minimizing the error, in a noise-free manner, by utilizing intersample overlaps to obtain the projection images. We ensure that we make the best utilization of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experiments show compelling results in terms of visual similarity and colorimetric error.
Vanderlinden, Karen; Van de Putte, Bart
2017-04-01
Even though breastfeeding is typically considered the preferred feeding method for infants worldwide, in Belgium breastfeeding rates remain low across native and migrant groups, and the underlying determinants are unclear. Furthermore, research examining contextual effects, especially regarding gender (in)equality and ideology, has not been conducted. We hypothesized that greater gender equality scores in the country of origin result in higher breastfeeding chances. Because gender equality does not operate only at the contextual level but can be mediated through individual-level resources, we hypothesized the following for maternal education: higher maternal education will be an important positive predictor of exclusive breastfeeding in Belgium, but its effects will differ across origin countries. Based on IKAROS data (Geïntegreerd Kind Activiteiten en Regio Ondersteunings Systeem), we perform multilevel analyses on 27 936 newborns. Feeding method is indicated by exclusive breastfeeding 3 months after childbirth. We measure gender (in)equality using Global Gender Gap scores from the mother's origin country. Maternal education is a metric variable based on International Standard Classification of Education indicators. Results show that 3.6% of the variation in breastfeeding can be explained by differences between the migrant mothers' countries of origin. However, the effect of gender (in)equality appears to be non-significant. After adding maternal education, the effect for origin countries scoring low on gender equality turns significant. Maternal education on its own shows a strong positive association with exclusive breastfeeding and, furthermore, has different effects for different origin countries. Possible explanations are discussed in depth, setting a direction for further research on the different pathways through which gender (in)equality and maternal education affect breastfeeding. © 2016 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, J; Lasio, G; Chen, S
2015-06-15
Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve, based on a novel image registration and organ mapping method, for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organ-at-risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves was created based on the mean HU values of the OARs and the corresponding mass densities of the OARs in the PCT. We then compared our proposed conversion curve with the traditional Catphan phantom-based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Last, the PTV and OAR doses, DVHs and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case, containing a pair of PCT and CBCT volumes, was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between the plans of the PCT and the CBCT corrected using the Catphan-based method are −4.39% for the mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan than the traditional Catphan-based calibration method.
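The third step amounts to fitting a piecewise curve through (mean CBCT HU, planning-CT mass density) pairs measured on the mapped OARs. A minimal sketch of that curve construction, with entirely assumed numbers:

```python
import numpy as np

# Hypothetical mean HU values measured per mapped OAR in the CBCT, paired
# with the mass densities of the same organs taken from the planning CT.
cbct_mean_hu = np.array([-950.0, -80.0, 0.0, 40.0, 950.0])   # assumed values
mass_density = np.array([0.001, 0.92, 1.00, 1.05, 1.85])     # g/cm^3, assumed

order = np.argsort(cbct_mean_hu)
hu_pts, rho_pts = cbct_mean_hu[order], mass_density[order]

def hu_to_density(hu_values):
    """Patient-specific CBCT HU -> mass density via piecewise-linear interpolation."""
    return np.interp(hu_values, hu_pts, rho_pts)
```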
[Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].
Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang
2016-02-01
Medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near-infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with second derivative (SD) and Norris derivative (ND) filtering was employed in spectral pretreatment. The spectral range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectral range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo-uninformative variable elimination (MC-UVE), genetic algorithm (GA) and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables, respectively. Based on the filtered spectral variables, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA; for RMSEC and RMSEP: GA < MC-UVE < CARS
Original Research and Peer Review Using Web-Based Collaborative Tools by College Students
ERIC Educational Resources Information Center
Cakir, Mustafa; Carlsen, William S.
2007-01-01
The Environmental Inquiry program supports inquiry-based, student-centered science teaching on selected topics in the environmental sciences. Many teachers are unfamiliar both with the underlying science of toxicology and with the process and importance of peer review in the scientific method. The protocol and peer review process were tested with college…
USDA-ARS's Scientific Manuscript database
Adaptive waveform interpretation with Gaussian filtering (AWIGF) and the second-order bounded mean oscillation operator Z²(u,t,r) are TDR analysis methods based on second-order differentiation. AWIGF was originally designed for relatively long-probe (greater than 150 mm) TDR waveforms, while Z s...
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
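The "joint dictionary" idea is essentially dictionary learning on vectors that concatenate all three color channels of a patch, so one set of atoms captures both per-channel structure and inter-channel correlation. A loose scikit-learn sketch (the SPM pooling stage that follows in the paper is omitted, and all names are illustrative):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def joint_sparse_codes(patches_rgb, n_atoms=256, alpha=1.0):
    # patches_rgb: (n_patches, p, p, 3). Concatenating the three channels of
    # each patch into one vector yields the "joint" dictionary of JScSPM.
    X = patches_rgb.reshape(len(patches_rgb), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # remove per-patch DC
    D = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                    random_state=0).fit(X).components_
    # Sparse codes w.r.t. the joint dictionary; these carry color information.
    return sparse_encode(X, D, algorithm='lasso_lars', alpha=alpha)
```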
Chemical and genetic discrimination of Cistanches Herba based on UPLC-QTOF/MS and DNA barcoding.
Zheng, Sihao; Jiang, Xue; Wu, Labin; Wang, Zenghui; Huang, Linfang
2014-01-01
Cistanches Herba (Rou Cong Rong), known as the "Ginseng of the desert", has a striking curative effect on strength and nourishment, especially in reinforcing the kidney to strengthen yang. However, the two plant origins of Cistanches Herba, Cistanche deserticola and Cistanche tubulosa, vary in terms of pharmacological action and chemical components. To discriminate the plant origin of Cistanches Herba, a combined chemical and genetic method system--UPLC-QTOF/MS technology and DNA barcoding--was employed for the first time in this study. The results indicated that three potential marker compounds (an isomer of campneoside II, cistanoside C, and cistanoside A) can discriminate the two origins by PCA and OPLS-DA analyses. DNA barcoding enabled accurate differentiation of the two origins; the NJ tree showed that the two origins clustered into two clades. Our findings demonstrate that the two origins of Cistanches Herba possess different chemical compositions and genetic variation. This is the first reported evaluation of the two origins of Cistanches Herba, and the finding will facilitate quality control and its clinical application.
Yudthavorasit, Soparat; Wongravee, Kanet; Leepipatpiboon, Natchanun
2014-09-01
Chromatographic fingerprints of gingers from five ginger-producing countries (China, India, Malaysia, Thailand and Vietnam) were newly established to discriminate the origin of ginger. The pungent bioactive principles of ginger, gingerols, and six other gingerol-related compounds were determined and identified. Their variations in HPLC profiles create a characteristic pattern for each origin when analysed by similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and linear discriminant analysis (LDA). As a result, the ginger profiles tended to be grouped and separated on the basis of the geographical closeness of the countries of origin. An effective mathematical model with high predictive ability was obtained, and chemical markers for each origin were identified as the characteristic active compounds to differentiate ginger origin. The proposed method is useful for quality control of ginger for origin labelling and for assessing food authenticity issues. Copyright © 2014 Elsevier Ltd. All rights reserved.
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. Using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Sensitive, Rapid Detection of Bacterial Spores
NASA Technical Reports Server (NTRS)
Kern, Roger G.; Venkateswaran, Kasthuri; Chen, Fei; Pickett, Molly; Matsuyama, Asahi
2009-01-01
A method of sensitive detection of bacterial spores within delays of no more than a few hours has been developed to provide an alternative to a prior three-day NASA standard culture-based assay. A capability for relatively rapid detection of bacterial spores would be beneficial for many endeavors, a few examples being agriculture, medicine, public health, defense against biowarfare, water supply, sanitation, hygiene, and the food-packaging and medical-equipment industries. The method involves the use of a commercial rapid microbial detection system (RMDS) that utilizes a combination of membrane filtration, adenosine triphosphate (ATP) bioluminescence chemistry, and analysis of luminescence images detected by a charge-coupled-device camera. This RMDS has been demonstrated to be highly sensitive in enumerating microbes (it can detect as few as one colony-forming unit per sample) and has been found to yield data in excellent correlation with those of culture-based methods. What makes the present method necessary is that the specific RMDS and the original protocols for its use are not designed for discriminating between bacterial spores and other microbes. In this method, a heat-shock procedure is added prior to an incubation procedure that is specified in the original RMDS protocols. In this heat-shock procedure (which was also described in a prior NASA Tech Briefs article on enumerating spore-forming bacteria), a sample is exposed to a temperature of 80 °C for 15 minutes. Spores can survive the heat shock, but non-spore-forming bacteria and spore-forming bacteria that are not in spore form cannot. Therefore, any colonies that grow during incubation after the heat shock are deemed to have originated as spores.
Fan, Chunlin; Deng, Jiewei; Yang, Yunyun; Liu, Junshan; Wang, Ying; Zhang, Xiaoqi; Fai, Kuokchiu; Zhang, Qingwen; Ye, Wencai
2013-10-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS) method integrating multi-ingredient determination and fingerprint analysis has been established for quality assessment and control of leaves from Ilex latifolia. The method is fast, efficient and accurate, and allows multi-ingredient determination and fingerprint analysis in one chromatographic run within 13 min. Multi-ingredient determination was performed based on the extracted ion chromatograms of the exact pseudo-molecular ions (with a 0.01 Da window), and fingerprint analysis was performed based on the base peak chromatograms, obtained by negative-ion electrospray ionization QTOF-MS. The method validation results demonstrated that the developed method possesses desirable specificity, linearity, precision and accuracy. The method was used to analyze 22 I. latifolia samples from different origins. Quality assessment was achieved using both similarity analysis (SA) and principal component analysis (PCA), and the results from SA were consistent with those from PCA. Our experimental results demonstrate that the strategy of integrating multi-ingredient determination and fingerprint analysis using the UPLC-QTOF-MS technique is a useful approach for rapid pharmaceutical analysis, with promising prospects for the differentiation of origin, the determination of authenticity, and the overall quality assessment of herbal medicines. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder with a softmax regression layer forms the deep-net stage, and re-training based on the backpropagation (BP) algorithm forms the fine-tuning stage. The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements, compared with existing techniques.
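A compressed-domain classification pipeline of this general kind can be sketched as follows; note that an MLP classifier here stands in for the paper's stacked sparse autoencoder with softmax layer and BP fine-tuning, and all variable names are assumptions.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neural_network import MLPClassifier

def classify_compressed(X, y, compression=0.1):
    """X: (n_signals, n_samples) vibration segments; y: fault labels.
    Compress far below the Nyquist rate with a random Gaussian sensing
    matrix, then learn directly in the compressed domain (no reconstruction)."""
    m = max(1, int(compression * X.shape[1]))
    proj = GaussianRandomProjection(n_components=m, random_state=0)
    Y = proj.fit_transform(X)                  # y = Phi x, one row per signal
    # Stand-in for the paper's sparse-autoencoder DNN with fine-tuning:
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    return clf.fit(Y, y)
```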
Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla
2016-06-01
Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomically defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into the 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets. The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.
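Gene-centroid classification of this kind reduces to nearest-centroid assignment, for example by Pearson correlation. A minimal sketch (the 57-gene panel itself is not reproduced here, and the correlation metric is an assumption):

```python
import numpy as np

def make_centroids(expr, labels):
    """expr: (samples, genes) panel-gene expression; labels: subtype per sample.
    Returns the mean expression profile (centroid) of each subtype."""
    return {c: expr[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(sample, centroids):
    """Assign the subtype whose centroid correlates best with the sample."""
    return max(centroids,
               key=lambda c: np.corrcoef(sample, centroids[c])[0, 1])
```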
Horacek, Micha; Hansel-Hohl, Karin; Burg, Kornel; Soja, Gerhard; Okello-Anyanga, Walter; Fluch, Silvia
2015-01-01
The indication of origin of sesame seeds and sesame oil is one of the important factors influencing its price, as it is produced in many regions worldwide and certain provenances are especially sought after. We joined stable carbon and hydrogen isotope analysis with DNA based molecular marker analysis to study their combined potential for the discrimination of different origins of sesame seeds. For the stable carbon and hydrogen isotope data a positive correlation between both isotope parameters was observed, indicating a dominant combined influence of climate and water availability. This enabled discrimination between sesame samples from tropical and subtropical/moderate climatic provenances. Carbon isotope values also showed differences between oil from black and white sesame seeds from identical locations, indicating higher water use efficiency of plants producing black seeds. DNA based markers gave independent evidence for geographic variation as well as provided information on the genetic relatedness of the investigated samples. Depending on the differences in ambient environmental conditions and in the genotypic fingerprint, a combination of both analytical methods is a very powerful tool to assess the declared geographic origin. To our knowledge this is the first paper on food authenticity combining the stable isotope analysis of bio-elements with DNA based markers and their combined statistical analysis. PMID:25831054
Ma, Ruijie; Lin, Xianming
2015-12-01
Problem-based teaching (PBT) has become a main approach to training in universities around the world. Combined with the team-oriented learning method, PBT can become a method suited to education in medical universities. In this paper, based on common questions arising in teaching Jingluo Shuxue Xue (Science of Meridians and Acupoints), the concepts and characteristics of PBT and the team-oriented learning method are analyzed. The implementation steps of PBT were set up with reference to the team-oriented learning method. By quoting the original text of Beiji Qianjin Yaofang (Essential Recipes for Emergent Use Worth a Thousand Gold), a PBT case analysis of "the thirteen devil points" was established.
Image dehazing based on non-local saturation
NASA Astrophysics Data System (ADS)
Wang, Linlin; Zhang, Qian; Yang, Deyun; Hou, Yingkun; He, Xiaoting
2018-04-01
In this paper, a method based on a non-local saturation algorithm is proposed to avoid block and halo effects in single image dehazing with the dark channel prior. First, we convert the original image from RGB color space into HSV color space, following the idea of the non-local method; image saturation is weighted equally within a fixed window whose size depends on the image resolution. Second, we use the saturation to estimate the atmospheric light value and the transmission rate. Then, through the function of saturation and transmission, the haze-free image is obtained based on the atmospheric scattering model. Compared with the results of existing methods, our method better restores image color and enhances contrast. We validate the proposed method with quantitative and qualitative evaluations; experiments show better visual effects with high efficiency.
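Once the transmission map and atmospheric light have been estimated from saturation, recovery follows by inverting the atmospheric scattering model I = J·t + A·(1 − t). A sketch of that inversion step only (the saturation-based estimation itself is not reproduced):

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).
    I: (H, W, 3) hazy image in [0, 1]; t: (H, W) transmission map;
    A: airlight vector (3,). Clamping t avoids division blow-up."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```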
Lenses that provide the transformation between two given wavefronts
NASA Astrophysics Data System (ADS)
Criado, C.; Alamo, N.
2016-12-01
We give an original method to design four types of lenses that solve the following problems: focusing a given wavefront at a given point, and performing the transformation between two arbitrary incoming and outgoing wavefronts. The method for designing the lens profiles is based on the optical properties of the envelopes of certain families of Cartesian ovals of revolution.
ERIC Educational Resources Information Center
Sunarto, M. J. Dewiyani; Sagirani, Tri
2014-01-01
"The rise of Indonesia Golden Generation" is the theme of National Education Day in 2012. In an effort to create a golden generation; education must be interpreted as a complex problem, in particular the cultivation of character education that was originally using indoctrination method. Given the shifting of the changing times,…
Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong
2017-01-01
Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, we first defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. We then proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are identified by correlating privacy-sensitive spatial regions with the spatial grid cells in the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity dataset database. We conducted extensive experiments to test the performance of the proposed method and to explore the influence of the parameter K. The results demonstrate that our approach is faster and more effective at hiding privacy-sensitive sequence rules, in terms of the ratio of sensitive rules hidden, and thus at eliminating inference attacks. Our method also had fewer side effects than the traditional spatial-temporal k-anonymity method in terms of the ratio of new sensitive rules generated, and essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
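The generalization/avoidance idea can be illustrated with a toy subsequence-rule check: any candidate anonymity set whose cell sequence would support a sensitive rule is coarsened before release. This is only a schematic reading of the method; the rule representation and the `coarsen` helper are hypothetical.

```python
def supports(seq, rule):
    """True if the location-cell sequence contains the rule's antecedent
    cells in order (not necessarily contiguously), followed by its consequent."""
    lhs, rhs = rule                     # e.g. (('c3', 'c7'), 'c9')
    i = 0
    for cell in seq:
        if i < len(lhs) and cell == lhs[i]:
            i += 1
        elif i == len(lhs) and cell == rhs:
            return True
    return False

def release(candidate_sets, sensitive_rules, coarsen):
    """Coarsen (generalize) any candidate anonymity set that would support a
    privacy-sensitive rule, so the rule stays hidden in the extended database."""
    return [[coarsen(c) for c in seq]
            if any(supports(seq, r) for r in sensitive_rules) else seq
            for seq in candidate_sets]
```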
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally had the best diagnostic quality and gave more detail about the visibility of vessels and structures in capsule endoscopy images.
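One plausible reading of this scheme for a single color component, under the assumption that the overlapping 2x2 group averages are equivalent to a sliding 2x2 mean filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_channel(ch):
    """Half-unit weighted bilinear sketch for one R, G or B component:
    every group of four adjacent pixels shares its average (half-unit
    weights in both directions), the overlapping group averages form the
    preliminary-enhanced image, and the final image halves the sum of the
    original and preliminary images."""
    ch = ch.astype(float)
    preliminary = uniform_filter(ch, size=2)   # overlapping 2x2 averages
    return (ch + preliminary) / 2.0
```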
Dual Key Speech Encryption Algorithm Based Underdetermined BSS
Zhao, Huan; Chen, Zuo; Zhang, Xixiang
2014-01-01
When the number of mixed signals is less than the number of source signals, underdetermined blind source separation (BSS) is a significantly difficult problem. Because speech communication involves large amounts of data and demands real-time performance, we utilize the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated by a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed algorithm for speech signal encryption can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality. PMID:24955430
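Schematically, encryption mixes one speech frame with the two key signals through a secret mixing row (one mixture, three sources); the sketch below shows the algebra with exact arithmetic where the paper uses an approximate calculation. All names are illustrative.

```python
import numpy as np

def encrypt(speech, key_random, key_chaotic, A):
    """Mix one speech frame with two key signals: underdetermined from an
    attacker's view (1 mixture, 3 sources). A is a secret 3-element mixing
    row shared with the receiver; the keys are the one-time pad and the
    chaotic signal."""
    S = np.vstack([speech, key_random, key_chaotic])
    return A @ S                                # ciphertext signal

def decrypt(cipher, key_random, key_chaotic, A):
    """The receiver knows the keys and A, so the speech term can be isolated."""
    return (cipher - A[1] * key_random - A[2] * key_chaotic) / A[0]
```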
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, based on combining palmprint with two-layer error correction codes, a novel biometrics encryption method is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolution code to correct burst errors. The second layer uses cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, they are fused together by XORing operation. The information is stored in a smart card. Finally, the original keys extraction process is the information in the smart card XOR the user's palmprint features and then decoded with convolutional and cyclic two-layer code. The experimental results and security analysis show that it can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
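The scheme is a fuzzy-commitment construction: the encoded key is XORed with the palmprint feature bits, and the ECC absorbs the bit errors between enrollment and query palmprints. In the sketch below, a repetition code stands in for the paper's convolutional-plus-cyclic two-layer code, and all names are illustrative.

```python
import numpy as np

def repetition_encode(bits, r=3):
    # Stand-in ECC: [b0, b1, ...] -> [b0, b0, b0, b1, b1, b1, ...]
    return np.repeat(bits, r)

def repetition_decode(bits, r=3):
    # Majority vote within each group of r repeated bits.
    return (bits.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

def enroll(key_bits, palm_bits):
    """palm_bits must match the encoded-key length (r * len(key_bits)).
    The XOR result is what gets stored on the smart card."""
    return np.bitwise_xor(repetition_encode(key_bits), palm_bits)

def release_key(stored, palm_bits_query):
    """A close-enough query palmprint leaves only a few bit errors,
    which the ECC removes, recovering the original key exactly."""
    return repetition_decode(np.bitwise_xor(stored, palm_bits_query))
```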
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumway, R.H.; McQuarrie, A.D.
Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information-theoretic techniques are proposed, and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern of typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified, whereas for two earthquakes, no pattern is evident.
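The frequency-domain technique rests on the fact that a delayed echo imprints a periodic ripple on the log spectrum, which the real cepstrum turns into a peak at the firing delay. A self-contained illustration with synthetic data:

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: IFFT of the log magnitude spectrum. Echoes from
    ripple-fired shots produce a peak at the firing delay (quefrency)."""
    spectrum = np.abs(np.fft.rfft(x)) + 1e-12
    return np.fft.irfft(np.log(spectrum))

# Synthetic example: a signal plus a delayed, attenuated echo of itself
# yields a cepstral peak near the delay.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delay, alpha = 200, 0.6
x = s.copy()
x[delay:] += alpha * s[:-delay]
c = real_cepstrum(x)
print(np.argmax(c[50:1000]) + 50)   # ~200 samples, the echo delay
```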
Research on hotspot discovery in internet public opinions based on improved K-means.
Wang, Gensheng
2013-01-01
How to discover hotspots in Internet public opinion effectively is an active research field, since hotspot discovery plays a key role in helping governments and corporations find useful information in the mass of Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and the calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and express the characteristics of website texts, and to define the similarity between two website texts, respectively. Second, the clustering principle and the method of selecting the initial classification centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm can improve the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice.
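A sketch of such a pipeline is shown below: TF-IDF text features with a deterministic farthest-first choice of initial centers, which is one common replacement for random seeding (the paper's exact initialization scheme is not reproduced; names are illustrative).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_topics(texts, k):
    # TF-IDF features with L2-normalized rows, so Euclidean distance
    # behaves like a cosine-style text similarity.
    X = TfidfVectorizer().fit_transform(texts).toarray()
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    # Farthest-first traversal: each new center is the text farthest
    # from all centers chosen so far (deterministic, spread-out seeds).
    centres = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[int(np.argmax(d))])
    km = KMeans(n_clusters=k, init=np.array(centres), n_init=1).fit(X)
    return km.labels_
```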
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with arbitrary element types. The proposed numerical method has simplified logic and better performance and parallel efficiency than the implementation of the original AAV method. Computer experiments demonstrate the robustness of the method and its convergence to the difference solution.
Mu, Zhaobin; Feng, Xiaoxiao; Zhang, Yun; Zhang, Hongyan
2016-02-01
A multi-residue method based on modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) sample preparation, followed by liquid chromatography tandem mass spectrometry (LC-MS/MS), was developed and validated for the determination of three selected fungicides (propiconazole, pyraclostrobin, and isopyrazam) in seven foods of animal origin. The overall recoveries at the three spiking levels of 0.005, 0.05, and 0.5 mg kg⁻¹ ranged from 72.3 to 101.4%, with relative standard deviation (RSD) values between 0.7 and 14.9%. The method shows good linearity at concentrations between 0.001 and 1 mg L⁻¹, with a coefficient of determination (R²) >0.99 for each target analyte. The limits of detection (LODs) for the target analytes were between 0.04 and 1.26 μg kg⁻¹, and the limits of quantification (LOQs) were between 0.13 and 4.20 μg kg⁻¹. The matrix effect for each individual compound was evaluated through the study of the ratios of the areas obtained in solvent and matrix-matched standards. The optimized method showed a negligible matrix effect for propiconazole (within 20%), whereas for pyraclostrobin and isopyrazam the matrix effect was relatively significant, with a maximum value of 49.8%. The developed method was successfully applied to the analysis of 210 samples of animal origin obtained from 16 provinces of China. The results suggest that the developed method is satisfactory for trace analysis of the three fungicides in foods of animal origin.
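Matrix effects of this kind are often quantified from the slopes of solvent versus matrix-matched calibration curves, with |ME| ≤ 20% treated as negligible. A sketch of that common slope-based formulation (not necessarily the exact calculation used in this study):

```python
import numpy as np

def matrix_effect(conc, area_solvent, area_matrix):
    """ME% = (slope_matrix / slope_solvent - 1) * 100, from linear fits of
    peak area vs concentration in solvent and matrix-matched standards."""
    slope_s = np.polyfit(conc, area_solvent, 1)[0]
    slope_m = np.polyfit(conc, area_matrix, 1)[0]
    return (slope_m / slope_s - 1.0) * 100.0
```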
Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia
2014-08-01
We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.
[Up-to-date methods of cell therapy in treatment of burns].
Smirnov, S V; Kiselev, I V; Vasil'ev, A V; Terskikh, V V
2003-01-01
A complex of methods for the repair of wounded and burned skin by transplantation of keratinocytes and fibroblasts grown in vitro is proposed. Five different tissue constructions may be used. An original method of skin recovery based on the use of a living skin equivalent has been developed, and the results of its clinical application are presented. It is concluded that cell constructions may be used in the combined treatment of wounds and burns.
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show the excellent performance of the proposed method.
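The building block is the guided image filter of He et al., a local linear model q = a·I + b computed with box filters. A minimal single-channel sketch (the fusion scheme built on top of it, e.g. injecting panchromatic detail into each MS band, is not reproduced here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving guided filter: output q = mean(a)*I + mean(b), where
    a, b solve a local linear regression of p on the guide I in each window.
    I, p: 2-D float arrays; r: window radius; eps: regularization."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)        # small near edges in I -> edges kept
    b = mp - a * mI
    return mean(a) * I + mean(b)
```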
Kaganov, A Sh; Kir'yanov, P A
2015-01-01
The objective of the present publication was to discuss the possibility of applying cybernetic modeling methods to overcome the apparent discrepancy between two kinds of speech records, viz. the initial ones (e.g., those obtained in the course of special investigation activities) and the voice prints obtained from the persons subjected to criminalistic examination. The paper is based on literature sources and on the materials of original criminalistic examinations performed by the authors.
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on the holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from the sub-images separately. Moreover, the structure information of the different sub-images from the same face image is considered in the proposed method, with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.
An archaeal origin of eukaryotes supports only two primary domains of life.
Williams, Tom A; Foster, Peter G; Cox, Cymon J; Embley, T Martin
2013-12-12
The discovery of the Archaea and the proposal of the three-domains 'universal' tree, based on ribosomal RNA and core genes mainly involved in protein translation, catalysed new ideas for cellular evolution and eukaryotic origins. However, accumulating evidence suggests that the three-domains tree may be incorrect: evolutionary trees made using newer methods place eukaryotic core genes within the Archaea, supporting hypotheses in which an archaeon participated in eukaryotic origins by founding the host lineage for the mitochondrial endosymbiont. These results provide support for only two primary domains of life--Archaea and Bacteria--because eukaryotes arose through partnership between them.
An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle
NASA Astrophysics Data System (ADS)
Wang, Yue; Gao, Dan; Mao, Xuming
2018-03-01
A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product designs.
Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara
2017-07-19
In a collaborative study involving six laboratories in the USA, Europe, and India, the molecular weight distributions of a panel of heparin sodium samples were determined in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. The porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within the limits, and others fell above them. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination described in the USP monograph uses a single, broad-standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns, which may be short-lived in some laboratories. Using the panel of samples described above, methods based on robust polymer-based columns have been developed. In addition to the use of the USP's broad-standard calibrant for heparin sodium with these columns, a set of conditions has been devised that allows light-scattering-detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
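For context, the slice-based molecular weight averages obtained from a calibrated GPC trace follow standard formulas; a minimal sketch with assumed inputs:

```python
import numpy as np

def mw_averages(heights, molar_masses):
    """Slice-based averages from a GPC trace: detector heights h_i sampled
    at calibrated molar masses M_i across the elution profile."""
    h = np.asarray(heights, dtype=float)
    M = np.asarray(molar_masses, dtype=float)
    Mn = h.sum() / (h / M).sum()      # number-average molecular weight
    Mw = (h * M).sum() / h.sum()      # weight-average molecular weight
    return Mn, Mw                     # Mw/Mn gives the polydispersity
```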
ERIC Educational Resources Information Center
Klegeris, Andis; Bahniwal, Manpreet; Hurren, Heather
2013-01-01
Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such…
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, unconditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross-validation and back substitution were applied. The results were, respectively, 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for the SVM, and 0.8722 and 0.9722 for the multilevel model. Overall, using curvelet-based textural features after dimensionality reduction together with clinical predictors, the highest accuracy rate was achieved with the SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
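The SVM route can be sketched as a standard scikit-learn pipeline. Note the paper applies PCA to the curvelet features before appending the clinical predictors, whereas this simplified sketch transforms the stacked feature vector as a whole; `X` and `y` are assumed arrays.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: curvelet texture features stacked with the 11 clinical predictors;
# y: benign/malignant labels (arrays assumed, not the study's data).
model = make_pipeline(StandardScaler(), PCA(n_components=12), SVC(kernel='rbf'))

def evaluate(model, X, y):
    # 10-fold cross-validation, mirroring the paper's evaluation protocol.
    return cross_val_score(model, X, y, cv=10).mean()
```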
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao; Wang, Yuanzhong
2018-01-15
Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide, and its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) spectroscopy and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 192 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. The models also have excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms.
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao
2018-01-01
Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide, and its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) spectroscopy and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 184 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. The models also have excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms. PMID:29342969
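GS-SVM is a grid search over the RBF-SVM hyperparameters; a sketch with the customary exponential grids (the actual grids and folds used in the study are not stated in the abstract, so these values are assumptions):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_gs_svm(X_train, y_train):
    # X_train: fused ICP-AES/UV-Vis/FT-MIR feature variables; y_train: origins.
    grid = {'C': [2.0 ** k for k in range(-5, 11, 2)],
            'gamma': [2.0 ** k for k in range(-11, 3, 2)]}
    gs = GridSearchCV(SVC(kernel='rbf'), grid, cv=5)
    return gs.fit(X_train, y_train)   # inspect gs.best_params_, gs.best_score_
```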
Haematological validation of a computer-based bone marrow reporting system.
Nguyen, D T; Diamond, L W; Cavenagh, J D; Parameswaran, R; Amess, J A
1997-01-01
AIMS: To prove the safety and effectiveness of "Professor Belmonte", a knowledge-based system for bone marrow reporting, a formal evaluation of the reports generated by the system was performed. METHODS: Three haematologists (a consultant, a senior registrar, and a junior registrar), none of whom were involved in the development of the software, compared the unedited reports generated by Professor Belmonte with the original bone marrow reports in 785 unselected cases. Each haematologist independently graded the quality of Belmonte's reports using one of four categories: (a) better than the original report (more informative, containing useful information missing in the original report); (b) equivalent to the original report; (c) satisfactory, but missing information that should have been included; and (d) unsatisfactory. RESULTS: The consultant graded 64 reports as more informative than the original, 687 as equivalent to the original, 32 as satisfactory, and two as unsatisfactory. The senior registrar considered 29 reports to be better than the original, 739 to be equivalent to the original, 15 to be satisfactory, and two to be unsatisfactory. The junior registrar found that 88 reports were better than the original, 681 were equivalent to the original, 14 were satisfactory, and two were unsatisfactory. Each judge found two different reports to be unsatisfactory according to their criteria. All 785 reports generated by the computer system received at least two scores of satisfactory or better. CONCLUSIONS: In this representative study, Professor Belmonte generated bone marrow reports that proved to be as accurate as the original reports in a large university hospital. The haematology knowledge contained within the system, the reasoning process, and the function of the software are safe and effective for assisting haematologists in generating high quality bone marrow reports. PMID:9215118
Gilbertson, Simon
2013-08-07
This article presents and discusses a long-term, repeated-immersion research process that explores the meaning allocated to an episode of 50 seconds of music improvisation in early neurosurgical rehabilitation by a teenage boy with severe traumatic brain injury and his music therapist. The process began with the original therapy session in August 1994 and extends to the current time of writing in 2013. A diverse selection of qualitative research methods was used during repeated immersion and engagement with the selected episode. The multiple methods used in this enquiry include therapeutic narrative analysis and musicological and video analysis during my doctoral research between 2002 and 2004; arts-based research in 2008 using expressive writing; and arts-based research in 2012 based on the creation of a body cast of my right hand as I used it to play the first note of my music improvisation in the original therapy episode, accompanied by reflective journaling. The casting of my hand was done to explore and reconsider the role of my own body as an embodied and integral, but originally hidden, part of the therapy process. Together, these investigations explore the potential meanings of the episode of music improvisation in therapy in an innovative and imaginative way. This article does not aim at this stage to present a model or theory for neurorehabilitation, but offers an example of how a combination of diverse qualitative methods over an extended period of time can yield innovative and rich insights into initially hidden perspectives on health, well-being, and human relating.
Teleradiology from the provider's perspective-cost analysis for a mid-size university hospital.
Rosenberg, Christian; Kroos, Kristin; Rosenberg, Britta; Hosten, Norbert; Flessa, Steffen
2013-08-01
Real costs of teleradiology services have not been systematically calculated, and pricing policies are not evidence-based. This study aims to demonstrate the feasibility of an original cost analysis for teleradiology services and to identify break-even points for cost-effective practice. Based on the teleradiology services provided by the Greifswald University Hospital in northeastern Germany, a detailed process analysis and an activity-based costing model revealed costs per service unit according to eight examination categories. The Monte Carlo method was used to simulate the cost amplitude and identify pricing thresholds. Twenty-two sub-processes and four staff categories were identified. The average working time for one unit was 55 min (x-ray) to 72 min (whole-body CT). Personnel costs were dominant (up to 68%), representing lower limit costs. The Monte Carlo method showed the cost distribution per category according to the deficiency risk. Avoiding deficient pricing with a likelihood of 90% increased the cost of a cranial CT almost twofold compared with the lower limit cost. Original cost analysis is possible when providing teleradiology services with complex statutory requirements in place. The methodology and results provide useful data to help enhance efficiency in hospital management as well as implement realistic reimbursement fees. • Analysis of original costs of teleradiology is possible for a providing hospital • Results discriminate pricing thresholds and lower limit costs to perform cost-effective practice • The study methods represent a managing tool to enhance efficiency in providing facilities • The data are useful to help represent telemedicine services in regular medical fee schedules.
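A minimal sketch of the Monte Carlo step described above, in Python; the cost distributions and parameters are illustrative assumptions, not study data. Only the logic (simulate total unit cost, read the mean as the lower-limit cost and the 90% quantile as the no-deficit pricing threshold) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical per-unit cost components (EUR) for one examination category;
# the lognormal personnel term mimics a dominant, right-skewed cost driver.
personnel = rng.lognormal(mean=3.6, sigma=0.5, size=n)
it_and_overhead = rng.normal(loc=18.0, scale=4.0, size=n)
total = personnel + it_and_overhead

lower_limit = total.mean()        # average ("lower limit") cost
p90 = np.quantile(total, 0.90)    # price avoiding deficit in 90% of runs
print(f"lower-limit cost {lower_limit:.2f} EUR, 90% threshold {p90:.2f} EUR")
```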
Methods for producing silicon carbide architectural preforms
NASA Technical Reports Server (NTRS)
DiCarlo, James A. (Inventor); Yun, Hee (Inventor)
2010-01-01
Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
Systems found in nature often have large order, so their mathematical models contain many state variables, which increases computation time. In addition, not all variables are generally known, so estimation is needed to obtain the quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and state-variable estimation for a river system in order to measure the water level. Model reduction approximates a system by one of lower order that, without significant error, has dynamic behaviour similar to the original system. The Singular Perturbation Approximation method is a model reduction method in which all state variables of the equilibrated system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, computing estimates by predicting the state variables from the system dynamics and correcting with measurement data. Kalman filters are used to estimate state variables in both the original and the reduced system, and we compare the estimation results and the computation times of the two.
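For reference, a single predict/update cycle of the discrete Kalman filter the abstract relies on; the linear-Gaussian model and all matrices below are placeholders, not the river-system model itself.

```python
import numpy as np

def kalman_step(x, P, A, C, Q, R, y):
    """One predict/update cycle for x_{k+1} = A x_k + w, y_k = C x_k + v,
    with process noise cov Q and measurement noise cov R (assumed known)."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with measurement y
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The same function serves both the original and the reduced model; only the sizes of A, C, Q, R change, which is where the computation-time savings of the reduced system come from.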
[Determination of wine original regions using information fusion of NIR and MIR spectroscopy].
Xiang, Ling-Li; Li, Meng-Hua; Li, Jing-Mingz; Li, Jun-Hui; Zhang, Lu-Da; Zhao, Long-Lian
2014-10-01
Geographical origins of wine grapes are significant factors affecting wine quality and wine prices. Tasters' evaluation is a good method but has limitations, so it is important to discriminate different wine origin regions quickly and accurately. The present paper proposes a method to determine wine origin regions based on Bayesian information fusion of near-infrared (NIR) transmission spectra and mid-infrared (MIR) ATR spectra of wines. This method improves the determination results by expanding the sources of analysis information. NIR and MIR spectra of 153 wine samples from four different grape-growing regions were collected by near-infrared and mid-infrared Fourier transform spectrometers separately. These four regions, Huailai, Yantai, Gansu and Changli, are all typical geographical origins for Chinese wines. NIR and MIR discriminant models for wine regions were established using partial least squares discriminant analysis (PLS-DA) based on the NIR and MIR spectra separately. In PLS-DA, the regions of the wine samples are represented by a group of binary codes; with four wine regions in this paper, four output nodes stand for the categorical variables. The output node values for each sample in the NIR and MIR models were first normalized; these values stand for the probabilities of each sample belonging to each category. They served as input to the Bayesian discriminant formula as a priori probability values. The probabilities were substituted into the Bayesian formula to obtain posterior probabilities, from which the class membership of the samples can be judged. Considering the stability of the PLS-DA models, all wine samples were randomly divided into calibration and validation sets ten times. The results of the NIR and MIR discriminant models for the four wine regions were as follows: the average accuracy rates of the calibration sets were 78.21% (NIR) and 82.57% (MIR), and the average accuracy rates of the validation sets were 82.50% (NIR) and 81.98% (MIR). After using the method proposed in this paper, the accuracy rates of calibration and validation changed to 87.11% and 90.87% respectively, better determination results than either individual spectroscopy. These results suggest that Bayesian information fusion of NIR and MIR spectra is feasible for fast identification of wine origin regions.
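A minimal sketch of the fusion rule as described: the normalized PLS-DA outputs are treated as class probabilities and combined through Bayes' formula. The independence-style product rule below is one plausible reading of the abstract, not the authors' exact code.

```python
import numpy as np

def bayes_fuse(p_nir, p_mir, prior=None):
    """Fuse per-class probabilities from the NIR and MIR PLS-DA models,
    treating each model's normalized output as an independent likelihood."""
    p_nir, p_mir = np.asarray(p_nir), np.asarray(p_mir)
    if prior is None:
        prior = np.ones_like(p_nir) / len(p_nir)   # uniform over regions
    post = prior * p_nir * p_mir                   # unnormalized posterior
    return post / post.sum()                       # normalize over classes

# four regions: Huailai, Yantai, Gansu, Changli
print(bayes_fuse([0.4, 0.3, 0.2, 0.1], [0.5, 0.1, 0.3, 0.1]))
```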
Idehara, Kenji; Yamagishi, Gaku; Yamashita, Kunihiko; Ito, Michio
2008-01-01
The murine local lymph node assay (LLNA) is an accepted and widely used method for assessing the skin-sensitizing potential of chemicals. Here, we describe a non-radioisotopic modified LLNA in which adenosine triphosphate (ATP) content is used as an endpoint instead of radioisotope (RI); the method is termed LLNA modified by Daicel based on ATP content (LLNA-DA). Groups of female CBA/JNCrlj mice were treated topically on the dorsum of both ears with test chemicals or a vehicle control on days 1, 2, and 3; an additional fourth application was conducted on day 7. Pretreatment with 1% sodium lauryl sulfate solution was performed 1 h before each application. On day 8, the amount of ATP in the draining auricular lymph nodes was measured as an alternative endpoint by the luciferin-luciferase assay in terms of bioluminescence (relative light units, RLU). A stimulation index (SI) relative to the concurrent vehicle control was derived from the RLU values, and an SI of 3 was set as the cut-off value. Using the LLNA-DA method, 31 chemicals were tested and the results were compared with those of other test methods. The accuracy of LLNA-DA vs LLNA, guinea pig tests, and human tests was 93% (28/30), 80% (20/25), and 79% (15/19), respectively. The estimated concentration 3 (EC3) value was calculated and compared with that of the original LLNA; the EC3 values obtained by LLNA-DA were almost equal to those obtained by the original LLNA. The SI value based on ATP content is similar to that of the original LLNA, owing to the modifications in the chemical treatment procedure, which contribute to improving the SI value. It is concluded that LLNA-DA is a promising non-RI alternative method for evaluating the skin-sensitizing potential of chemicals.
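A small sketch of the two quantities the abstract turns on: the stimulation index from RLU values, and an EC3 obtained by linear interpolation between the dose points bracketing SI = 3 (a common LLNA convention, assumed here rather than taken from the paper).

```python
import numpy as np

def stimulation_index(rlu_treated, rlu_vehicle):
    """SI = mean ATP bioluminescence (RLU) of treated group / vehicle control."""
    return np.mean(rlu_treated) / np.mean(rlu_vehicle)

def ec3(concentrations, si_values):
    """Concentration giving SI = 3, by linear interpolation between the
    two dose points bracketing the cut-off; None if SI never reaches 3."""
    c, s = np.asarray(concentrations), np.asarray(si_values)
    for i in range(1, len(s)):
        if s[i-1] < 3 <= s[i]:  # bracketing pair
            return c[i-1] + (3 - s[i-1]) * (c[i] - c[i-1]) / (s[i] - s[i-1])
    return None

print(ec3([2.5, 5.0, 10.0], [1.4, 2.2, 4.1]))  # ~7.1 (% concentration)
```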
Translations on North Korea No. 487
1976-10-19
where the great Ch’ongsan-ri Spirit and the Ch’ongsan-ri Method originally started. Ch’ongsan-ri—a land of ecstasies—shone in the burning glow of the ... base and make complete combat preparations so as to be able to repulse all the enemy attacks in a single stroke. The people in the guerrilla base
The quantization of the chiral Schwinger model based on the BFT-BFV formalism
NASA Astrophysics Data System (ADS)
Kim, Won T.; Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
1997-03-01
We apply the newly improved Batalin-Fradkin-Tyutin (BFT) Hamiltonian method to the chiral Schwinger model in the case of the regularization ambiguity a>1. We show that one can systematically construct the first class constraints by the BFT Hamiltonian method, and also show that the well-known Dirac brackets of the original phase space variables are exactly the Poisson brackets of the corresponding modified fields in the extended phase space. Furthermore, we show that the first class Hamiltonian is simply obtained by replacing the original fields in the canonical Hamiltonian by these modified fields. Performing the momentum integrations, we obtain the corresponding first class Lagrangian in the configuration space.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
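To make the piecewise-constant idea concrete, here is a hedged sketch that approximates a scalar hereditary system x'(t) = f(x(t), x(t − τ)) by a finite-dimensional difference equation; the delayed logistic equation at the end is purely illustrative, not one of the paper's examples.

```python
import numpy as np

def simulate(f, x0, history, tau, N, T):
    """Approximate x'(t) = f(x(t), x(t - tau)) by a difference equation,
    carrying a piecewise-constant history on N subintervals of [-tau, 0]."""
    dt = tau / N                          # grid step tied to the delay
    z = np.array([history(-k * dt) for k in range(N, 0, -1)] + [x0], float)
    traj = [x0]
    for _ in range(int(T / dt)):
        xdot = f(z[-1], z[0])             # z[0] approximates x(t - tau)
        z = np.append(z[1:], z[-1] + dt * xdot)  # shift history, step state
        traj.append(z[-1])
    return np.array(traj)

# delayed logistic example: x' = x(t) * (1 - x(t - 1))
traj = simulate(lambda x, xd: x * (1 - xd), x0=0.5,
                history=lambda t: 0.5, tau=1.0, N=50, T=20.0)
```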
U'ren, Jana M; Dalling, James W; Gallery, Rachel E; Maddison, David R; Davis, E Christine; Gibson, Cara M; Arnold, A Elizabeth
2009-04-01
Fungi associated with seeds of tropical trees pervasively affect seed survival and germination, and thus are an important, but understudied, component of forest ecology. Here, we examine the diversity and evolutionary origins of fungi isolated from seeds of an important pioneer tree (Cecropia insignis, Cecropiaceae) following burial in soil for five months in a tropical moist forest in Panama. Our approach, which relied on molecular sequence data because most isolates did not sporulate in culture, provides an opportunity to evaluate several methods currently used to analyse environmental samples of fungi. First, intra- and interspecific divergence were estimated for the nu-rITS and 5.8S gene for four genera of Ascomycota that are commonly recovered from seeds. Using these values we estimated species boundaries for 527 isolates, showing that seed-associated fungi are highly diverse, horizontally transmitted, and genotypically congruent with some foliar endophytes from the same site. We then examined methods for inferring the taxonomic placement and phylogenetic relationships of these fungi, evaluating the effects of manual versus automated alignment, model selection, and inference methods, as well as the quality of BLAST-based identification using GenBank. We found that common methods such as neighbor-joining and Bayesian inference differ in their sensitivity to alignment methods; analyses of particular fungal genera differ in their sensitivity to alignments; and numerous and sometimes intricate disparities exist between BLAST-based versus phylogeny-based identification methods. Lastly, we used our most robust methods to infer phylogenetic relationships of seed-associated fungi in four focal genera, and reconstructed ancestral states to generate preliminary hypotheses regarding the evolutionary origins of this guild. Our results illustrate the dynamic evolutionary relationships among endophytic fungi, pathogens, and seed-associated fungi, and the apparent evolutionary distinctiveness of saprotrophs. Our study also elucidates the diversity, taxonomy, and ecology of an important group of plant-associated fungi and highlights some of the advantages and challenges inherent in the use of ITS data for environmental sampling of fungi.
Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis.
Hillis, Stephen L; Berbaum, Kevin S; Metz, Charles E
2008-05-01
The Dorfman-Berbaum-Metz (DBM) method has been one of the most popular methods for analyzing multireader receiver-operating characteristic (ROC) studies since it was proposed in 1992. Despite its popularity, the original procedure has several drawbacks: it is limited to jackknife accuracy estimates, it is substantially conservative, and it is not based on a satisfactory conceptual or theoretical model. Recently, solutions to these problems have been presented in three papers. Our purpose is to summarize and provide an overview of these recent developments. We present and discuss the recently proposed solutions for the various drawbacks of the original DBM method. We compare the solutions in a simulation study and find that they result in improved performance for the DBM procedure. We also compare the solutions using two real data studies and find that the modified DBM procedure that incorporates these solutions yields more significant results and clearer interpretations of the variance component parameters than the original DBM procedure. We recommend using the modified DBM procedure that incorporates the recent developments.
Double-tick realization of binary control program
NASA Astrophysics Data System (ADS)
Kobylecki, Michał; Kania, Dariusz
2016-12-01
This paper presents a procedure for the hardware implementation of binary control algorithms compliant with the IEC 61131-3 standard. The described transformation, based on set calculus and graphs, allows translation of the original control program into a form fully compliant with the original while yielding an architecture that executes in two clock ticks. The proposed method enables the efficient implementation of binary control in FPGAs using the standardized Ladder Diagram (LD) programming language.
ERIC Educational Resources Information Center
Obrecht, Dean H.
This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…
Bending of an infinite beam on a base with two parameters in the absence of a part of the base
NASA Astrophysics Data System (ADS)
Aleksandrovskiy, Maxim; Zaharova, Lidiya
2018-03-01
Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods have become topical. The rigor of analytical methods allows a more detailed and accurate characterization of structural behaviour, which improves the reliability of designs and can reduce their cost. In this article, a model with two parameters is used as the computational model of the base; it can effectively account for the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter voided base. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed length of the dip section, and analyses the results obtained for various values of the coefficient that accounts for the cohesion of the ground.
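The degenerate-kernel reduction is the key analytical step: with K(x,t) = Σ_i a_i(x) b_i(t), the integral equation collapses to a small linear system. The sketch below solves a toy Fredholm equation of the second kind this way; the kernel, right-hand side and λ are illustrative, not the beam problem.

```python
import numpy as np

# phi(x) = f(x) + lam * ∫_0^1 K(x,t) phi(t) dt, with degenerate kernel
# K(x,t) = sum_i a_i(x) b_i(t); unknowns reduce to c_i = ∫ b_i(t) phi(t) dt.
a = [lambda x: x, lambda x: x**2]
b = [lambda t: 1.0 + 0*t, lambda t: t]
f = lambda x: np.ones_like(x)
lam = 0.5

t = np.linspace(0, 1, 2001)
def integral(g):                      # simple quadrature on [0, 1]
    return np.trapz(g, t)

n = len(a)
# (I - lam*M) c = F, with M_ij = ∫ b_i(t) a_j(t) dt, F_i = ∫ b_i(t) f(t) dt
M = np.array([[integral(b[i](t) * a[j](t)) for j in range(n)] for i in range(n)])
F = np.array([integral(b[i](t) * f(t)) for i in range(n)])
c = np.linalg.solve(np.eye(n) - lam * M, F)

phi = lambda x: f(x) + lam * sum(c[j] * a[j](x) for j in range(n))
```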
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes this approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation of the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based essentially on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data, and some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
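For contrast with the PDE-based mean-envelope estimate, here is a minimal sketch of one pass of Huang's original algorithmic sifting step in 1-D (boundary handling and stopping criteria, which matter in practice, are omitted).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One pass of the classical 1-D sifting step: interpolate envelopes
    through local extrema, subtract their mean from the signal."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return x, None                            # too few extrema to continue
    upper = CubicSpline(maxima, x[maxima])(t)     # upper envelope
    lower = CubicSpline(minima, x[minima])(t)     # lower envelope
    mean = 0.5 * (upper + lower)                  # mean envelope
    return x - mean, mean                         # candidate IMF detail
```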
In Pursuit of Professionalism in the Field of Chemistry Education in China: The Story of Zhixin Liu
ERIC Educational Resources Information Center
Wei, Bing
2012-01-01
In China, science educators as a professional group were originally referred to as academic staff responsible for teaching the subject-based science teaching methods course at the related science departments at teachers' universities. In this study, a biographic method was used to approach the professional life of Zhixin Liu, who was a senior…
Situational Approaches to Direct Practice: Origin, Decline, and Re-Emergence
ERIC Educational Resources Information Center
Murdach, Allison D.
2007-01-01
During the 1890s and the first three decades of the 20th century, social work in the United States developed a community-based direct practice approach to family assistance and social reform. The basis for this method was a situational view of social life that emphasized the use of interpersonal and transactional methods to achieve social and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Matthew; Constable, Steve; Ing, Christopher
2014-06-21
We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics sampling using a Langevin equation based sampling of the canonical distribution. This LePIGS method originally incorporated a trivial trial wavefunction, ψ_T, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as a benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.
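A hedged sketch of evaluating a Jastrow-type trial wavefunction ψ_T = exp(−Σ_{i<j} u(r_ij)) for one particle configuration; the McMillan-type pair pseudopotential and its parameter are generic choices, not the paper's parametrization.

```python
import numpy as np

def jastrow_weight(coords, u):
    """psi_T = exp(-sum_{i<j} u(r_ij)) for one configuration of N particles;
    u is a pair pseudopotential encoding hard-core repulsion."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            total += u(r)
    return np.exp(-total)

# e.g. a McMillan-type u(r) = (b/r)**5 with an assumed parameter b
psi = jastrow_weight(np.random.rand(8, 3), lambda r: (0.3 / r)**5)
```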
Short text sentiment classification based on feature extension and ensemble classifier
NASA Astrophysics Data System (ADS)
Liu, Yang; Zhu, Xie
2018-05-01
With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. At present, the commonly used approaches can be divided into rule-based classification and statistical machine learning methods. Although micro-blog sentiment analysis has made good progress, there remain shortcomings such as limited accuracy and strong dependence of the sentiment classification effect on the available information. Aiming at the characteristics of Chinese short texts, such as little information, sparse features, and diverse expressions, this paper considers expanding the original text by mining related semantic information from comments, reposts, and other related information. First, this paper uses Word2vec to compute word similarity to extend the feature words. Then an ensemble classifier composed of SVM, KNN and HMM is used to analyze the sentiment of micro-blog short texts. The experimental results show that the proposed method can make good use of the comment and repost information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are all improved.
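A minimal sketch of the ensemble-voting step for the two members that map directly onto scikit-learn; the HMM member and the Word2vec feature extension are omitted, and X is assumed to already hold the extended feature vectors with integer polarity labels y.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def majority_vote(X_train, y_train, X_test):
    """Train the SVM and KNN members and combine their predictions by
    majority vote; labels are assumed to be small non-negative integers
    (e.g. 0 = negative, 1 = positive)."""
    members = [SVC(kernel="linear"), KNeighborsClassifier(n_neighbors=5)]
    votes = []
    for m in members:
        m.fit(X_train, y_train)
        votes.append(m.predict(X_test))
    votes = np.asarray(votes, dtype=int)
    # per-sample majority; ties resolve toward the smaller label
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```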
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
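For orientation, a minimal fixed-grid path-integral step — the conventional PI baseline the TPI method improves on, not the transformed scheme itself — for dX = f(X)dt + sqrt(2D)dW, evolving the PDF through the Gaussian short-time propagator via Chapman-Kolmogorov quadrature. Grid, drift and coefficients are illustrative.

```python
import numpy as np

def pi_step(x, p, f, D, dt):
    """Evolve PDF p on grid x one step: p(x, t+dt) = ∫ K(x|x') p(x', t) dx',
    with K Gaussian of mean x' + f(x')dt and variance 2*D*dt."""
    dx = x[1] - x[0]
    K = np.exp(-(x[:, None] - (x[None, :] + f(x)[None, :] * dt))**2
               / (4 * D * dt))
    K /= np.sqrt(4 * np.pi * D * dt)
    p_new = K @ p * dx                    # Chapman-Kolmogorov quadrature
    return p_new / (p_new.sum() * dx)     # renormalize on truncated grid

x = np.linspace(-6, 6, 401)
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # initial Gaussian PDF
for _ in range(100):
    p = pi_step(x, p, f=lambda z: -z, D=0.5, dt=0.01)  # OU drift
```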
Contextualising primate origins--an ecomorphological framework.
Soligo, Christophe; Smaers, Jeroen B
2016-04-01
Ecomorphology - the characterisation of the adaptive relationship between an organism's morphology and its ecological role - has long been central to theories of the origin and early evolution of the primate order. This is exemplified by two of the most influential theories of primate origins: Matt Cartmill's Visual Predation Hypothesis, and Bob Sussman's Angiosperm Co-Evolution Hypothesis. However, the study of primate origins is constrained by the absence of data directly documenting the events under investigation, and has to rely instead on a fragmentary fossil record and the methodological assumptions inherent in phylogenetic comparative analyses of extant species. These constraints introduce particular challenges for inferring the ecomorphology of primate origins, as morphology and environmental context must first be inferred before the relationship between the two can be considered. Fossils can be integrated in comparative analyses and observations of extant model species and laboratory experiments of form-function relationships are critical for the functional interpretation of the morphology of extinct species. Recent developments have led to important advancements, including phylogenetic comparative methods based on more realistic models of evolution, and improved methods for the inference of clade divergence times, as well as an improved fossil record. This contribution will review current perspectives on the origin and early evolution of primates, paying particular attention to their phylogenetic (including cladistic relationships and character evolution) and environmental (including chronology, geography, and physical environments) contextualisation, before attempting an up-to-date ecomorphological synthesis of primate origins. © 2016 Anatomical Society.
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
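A small sketch of the interactive-clustering core using SciPy's agglomerative tools; the toy OD records, the selected feature columns and the distance threshold stand in for the user's interactive choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# toy OD records: [origin_x, origin_y, dest_x, dest_y, dist_km, dur_min]
od = np.array([
    [121.47, 31.23, 121.50, 31.24,  3.2, 14.0],
    [121.48, 31.22, 121.51, 31.25,  4.0, 18.0],
    [121.30, 31.10, 121.52, 31.30, 30.5, 55.0],
])
features = od[:, [0, 1, 2, 3]]                       # user-selected variables
Z = linkage(features, method="average")              # hierarchical tree
labels = fcluster(Z, t=0.05, criterion="distance")   # user-adjusted threshold
print(labels)
```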
Degen, Bernd; Blanc-Jolivet, Céline; Stierand, Katrin; Gillet, Elizabeth
2017-03-01
During the past decade, the use of DNA for forensic applications has been extensively implemented for plant and animal species, as well as in humans. Tracing back the geographical origin of an individual usually requires genetic assignment analysis. These approaches are based on reference samples that are grouped into populations or other aggregates and intend to identify the most likely group of origin. Often this grouping does not have a biological but rather a historical or political justification, such as "country of origin". In this paper, we present a new nearest neighbour approach to individual assignment or classification within a given but potentially imperfect grouping of reference samples. This method, which is based on the genetic distance between individuals, functions better in many cases than commonly used methods. We demonstrate the operation of our assignment method using two data sets. One set is simulated for a large number of trees distributed in a 120km by 120km landscape with individual genotypes at 150 SNPs, and the other set comprises experimental data of 1221 individuals of the African tropical tree species Entandrophragma cylindricum (Sapelli) genotyped at 61 SNPs. Judging by the level of correct self-assignment, our approach outperformed the commonly used frequency and Bayesian approaches by 15% for the simulated data set and by 5-7% for the Sapelli data set. Our new approach is less sensitive to overlapping sources of genetic differentiation, such as genetic differences among closely-related species, phylogeographic lineages and isolation by distance, and thus operates better even for suboptimal grouping of individuals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
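A hedged sketch of the nearest-neighbour assignment idea: a query individual is assigned to the group of its closest reference genotype. The plain allele-count distance and the random reference panel are illustrative stand-ins for the paper's genetic distance and data.

```python
import numpy as np

def assign_origin(query, ref_genotypes, ref_groups):
    """Assign an individual to the group of its nearest reference neighbour.
    Genotypes are coded 0/1/2 (SNP allele counts)."""
    d = np.abs(ref_genotypes - query).sum(axis=1)   # distance to each reference
    return ref_groups[np.argmin(d)]

ref = np.random.randint(0, 3, size=(1221, 61))      # e.g. Sapelli-sized panel
groups = np.random.choice(["north", "south"], 1221)
print(assign_origin(np.random.randint(0, 3, 61), ref, groups))
```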
Topological charge number multiplexing for JTC multiple-image encryption
NASA Astrophysics Data System (ADS)
Chen, Qi; Shen, Xueju; Dou, Shuaifeng; Lin, Chao; Wang, Long
2018-04-01
We propose a topological charge number multiplexing method based on the JTC encryption system to achieve multiple-image encryption. Using this method, multiple images can be encrypted into a single ciphertext, and the original images can be recovered according to the authority level. The number of encrypted images is increased and, moreover, the quality of the decrypted images is improved. Results of computer simulations and an initial experiment verify the validity of our proposed method.
de Rijke, E; Schoorl, J C; Cerli, C; Vonhof, H B; Verdegaal, S J A; Vivó-Truyols, G; Lopatka, M; Dekter, R; Bakker, D; Sjerps, M J; Ebskamp, M; de Koster, C G
2016-08-01
Two approaches were investigated to discriminate between bell peppers of different geographic origins. Firstly, δ(18)O of fruit water and corresponding source water were analyzed and correlated to the regional GNIP (Global Network of Isotopes in Precipitation) values. The water and GNIP data showed good correlation with the pepper data, with a constant isotope fractionation of about -4. Secondly, compound-specific stable hydrogen isotope data were used for classification. Using n-alkane fingerprinting data, both linear discriminant analysis (LDA) and a likelihood-based classification using the kernel-density smoothed data were developed to discriminate between peppers from different origins. Both methods were evaluated using the δ(2)H values and the n-alkane relative composition as variables. Misclassification rates were calculated using a Monte-Carlo 5-fold cross-validation procedure. Comparable overall classification performance was achieved; however, the two methods showed sensitivity to different samples. The combined values of δ(2)H IRMS and complementary information regarding the relative abundance of four main alkanes in bell pepper fruit water have proven effective for geographic origin discrimination. Evaluation of the rarity of observing particular ranges for these characteristics could be used to make quantitative assertions regarding the geographic origin of bell peppers and, therefore, have a role in verifying compliance with labeling of geographical origin. Copyright © 2016 Elsevier Ltd. All rights reserved.
Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.
Fu, Zhaohao; Hao, Lingxin
2018-01-15
We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirical-estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections to between-migrant-nonmigrant connections, turning local connections to nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates migratory propensities of those well-connected nonmigrants who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes with the migration acceleration patterns. We conclude that network structural changes are essential for explaining migration acceleration observed in China during the 1995-2000 period.
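A minimal sketch of the structural-measurement step using NetworkX's k-core tools on a toy graph; the paper instead builds the network from census family ties and locations and tracks how migration events reshape it.

```python
import networkx as nx

# core_number gives each node's k-shell, separating a dense core
# from the periphery; the karate-club graph is a stand-in network.
G = nx.karate_club_graph()
core_of = nx.core_number(G)          # node -> largest k with node in k-core
kmax = max(core_of.values())
core = [n for n, k in core_of.items() if k == kmax]
periphery = [n for n, k in core_of.items() if k < kmax]
print(f"core size {len(core)}, periphery size {len(periphery)}")
```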
Muñoz, Eva; Muñoz, Gloria; Pineda, Laura; Serrahima, Eulalia; Centrich, Francesc
2012-01-01
A multiresidue method based on GC or LC and MS or MS/MS for the determination of 204 pesticides in diverse food matrixes of animal and plant origin is described. The method can include different stages of cleanup according to the chemical characteristics of each sample. Samples were extracted using accelerated solvent extraction. Those with a high fat content or that contained chlorophyll required further purification by gel permeation chromatography and/or SPE (ENVI-Carb). The methodology developed here was fully validated; the LOQs for the 204 pesticides are presented. The LOQ values lie between 0.01 and 0.02 mg/kg. However, in some cases, mainly in baby food, they were as low as 0.003 mg/kg, thereby meeting European Union requirements on maximum residue levels for pesticides, as outlined in European regulation 396/2005 and the Commission Directive 2003/13/EC. The procedure has been accredited for a wide scope of pesticides and matrixes by the Spanish Accreditation Body (ENAC) following ISO/IEC 17025:2005, as outlined in ENAC technical note NT-19.
All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis
Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L.; Terés, Lluís; Baumann, Reinhard R.
2016-01-01
We report on the detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates in ambient conditions without the need for a cleanroom environment or inert atmosphere and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices); in general, the manufacturing yield is much lower compared to established conventional manufacturing methods based on lithography. Thus, the focus of this contribution is set on a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing process, finally resulting in a yield improvement. PMID:27649784
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
ERIC Educational Resources Information Center
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
Rathi, Vinay K; Wang, Bo; Ross, Joseph S; Downing, Nicholas S; Kesselheim, Aaron S; Gray, Stacey T
2017-04-01
Objective The US Food and Drug Administration (FDA) approves indications for prescription drugs based on premarket pivotal clinical studies designed to demonstrate safety and efficacy. We characterized the pivotal studies supporting FDA approval of otolaryngologic prescription drug indications. Study Design Retrospective cross-sectional analysis. Setting Publicly available FDA documents. Subjects Recently approved (2005-2014) prescription drug indications for conditions treated by otolaryngologists or their multidisciplinary teams. Drugs could be authorized for treatment of otolaryngologic disease upon initial approval (original indications) or thereafter via supplemental applications (supplemental indications). Methods Pivotal studies were categorized by enrollment, randomization, blinding, comparator type, and primary endpoint. Results Between 2005 and 2014, the FDA approved 48 otolaryngologic prescription drug indications based on 64 pivotal studies, including 21 original indications (19 drugs, 31 studies) and 27 supplemental indications (18 drugs, 33 studies). Median enrollment was 299 patients (interquartile range, 198-613) for original indications and 197 patients (interquartile range, 64-442) for supplemental indications. Most indications were supported by ≥1 randomized study (original: 20/21 [95%], supplemental: 21/27 [78%]) and ≥1 double-blinded study (original: 14/21 [67%], supplemental: 17/27 [63%]). About half of original indications (9/21 [43%]) and one-quarter of supplemental indications (7/27 [26%]) were supported by ≥1 active-controlled study. Nearly half (original: 8/21 [38%], supplemental: 14/27 [52%]) of all indications were approved based exclusively on studies using surrogate markers as primary endpoints. Conclusion The quality of clinical evidence supporting FDA approval of otolaryngologic prescription drug indications varied widely. Otolaryngologists should consider limitations in premarket evidence when helping patients make informed treatment decisions about newly approved drugs.
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computing times, graphics processing unit multithreading or an increased spatial interval between control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
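The core GRBF step — fitting the weighted coefficients at control points — reduces to a linear least-squares problem. Below is a 1-D sketch (2-D images work the same way with 2-D centres); the grid, widths and test signal are illustrative choices, not the paper's settings.

```python
import numpy as np

# Fit Gaussian RBF weights at control points to represent a signal.
x = np.linspace(0, 1, 200)
y = np.exp(-((x - 0.4) / 0.05)**2) + 0.5 * np.exp(-((x - 0.7) / 0.08)**2)

centres = np.linspace(0, 1, 40)            # control points
s = 0.03                                   # RBF width
A = np.exp(-(x[:, None] - centres[None, :])**2 / (2 * s**2))
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # weighted coefficients
y_model = A @ w                            # continuous GRBF representation
```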
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT denoising method for chaotic signals is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
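To illustrate hierarchical (per-level) thresholding, here is a sketch using an ordinary discrete wavelet transform as a stand-in for the paper's synchrosqueezed transform, with a MAD-based noise estimate and a universal threshold per level rather than the authors' Stein-based threshold function.

```python
import numpy as np
import pywt

def hierarchical_threshold_denoise(x, wavelet="sym8", level=5):
    """Per-level ("hierarchical") soft thresholding: each decomposition
    level gets its own MAD-based noise scale and threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = [coeffs[0]]                             # keep approximation
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745     # level-wise noise scale
        thr = sigma * np.sqrt(2 * np.log(len(x))) # universal threshold
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)
```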
Kahl, Johannes; Busscher, Nicolaas; Mergardt, Gaby; Mäder, Paul; Torp, Torfinn; Ploeger, Angelika
2015-01-01
There is a need for authentication tools in order to verify the existing certification system. Recently, markers for analytical authentication of organic products were evaluated. Herein, crystallization with additives was described as an interesting fingerprint approach which needs further evidence, based on a standardized method and well-documented sample origin. The fingerprint of wheat cultivars from a controlled field trial is generated from structure analysis variables of crystal patterns. Method performance was tested on factors such as crystallization chamber, day of experiment and region of interest of the patterns. Two different organic treatments and two different treatments of the non-organic regime can be grouped together in each of three consecutive seasons. When the k-nearest-neighbor classification method was applied, approximately 84% of Runal samples and 95% of Titlis samples were classified correctly into organic and non-organic origin using cross-validation. Crystallization with additive offers an interesting complementary fingerprint method for organic wheat samples. When the method is applied to winter wheat from the DOK trial, organic and non-organic treated samples can be differentiated significantly based on pattern recognition. Therefore crystallization with additives seems to be a promising tool in organic wheat authentication. © 2014 Society of Chemical Industry.
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency; the techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. In the proposed method, low-resolution video is produced by a convolutional neural network (CNN) at the encoder, and the original resolution is reconstructed using a CNN at the decoder. The proposed method shows improved subjective performance across the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric, and diverse bitrates were tested to assess general performance. Experimental results showed that the BD-rate based on VMAF improved by about 51% compared to conventional HEVC; VMAF values improved especially at low bitrates. In subjective testing, the method also gave better visual quality at similar bit rates.
Stable isotope analyses-A method to distinguish intensively farmed from wild frogs.
Dittrich, Carolin; Struck, Ulrich; Rödel, Mark-Oliver
2017-04-01
Consumption of frog legs is increasing worldwide, with potentially dramatic effects for ecosystems. More and more functioning frog farms are reported to exist. However, due to the lack of reliable methods to distinguish farmed from wild-caught individuals, the origin of frogs in the international trade is often uncertain. Here, we present a new methodological approach to this problem. We investigated the isotopic composition of legally traded frog legs from suppliers in Vietnam and Indonesia. Muscle and bone tissue samples were examined for δ15N, δ13C, and δ18O stable isotope compositions, to elucidate the conditions under which the frogs grew up. We used DNA barcoding (16S rRNA) to verify species identities. We identified three traded species (Hoplobatrachus rugulosus, Fejervarya cancrivora and Limnonectes macrodon); species identities were partly deviating from package labeling. Isotopic values of δ15N and δ18O showed significant differences between species and country of origin. Based on low δ15N composition and generally little variation in stable isotope values, our results imply that frogs from Vietnam were indeed farmed. In contrast, the frogs from the Indonesian supplier likely grew up under natural conditions, indicated by higher δ15N values and stronger variability in the stable isotope composition. Our results indicate that stable isotope analyses seem to be a useful tool to distinguish between naturally growing and intensively farmed frogs. We believe that this method can be used to improve the control in the international trade of frog legs, as well as for other biological products, thus supporting farming activities and decreasing pressure on wild populations. However, we examined different species from different countries and had no access to samples of individuals with confirmed origin and living conditions. Therefore, we suggest improving this method further with individuals of known origin and history, preferably including samples of the respective nutritive bases.
The origin of spurious solutions in computational electromagnetics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.
1995-01-01
The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must be always included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and/or the accuracy of the numerical solutions. The div-curl method and the least-squares method provide rigorous derivation of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of the numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.
Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena
2007-11-05
A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method for determining the actual ethanol concentration of different samples of wholemeal bread with proper content of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those that were able to discriminate between samples of different ethanol concentrations. With the variables thus selected, a multivariate calibration model was then obtained by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, further reducing the number of original variables.
Study on Building Extraction from High-Resolution Images Using MBI
NASA Astrophysics Data System (ADS)
Ding, Z.; Wang, X. Q.; Li, Y. L.; Zhang, S. S.
2018-04-01
Building extraction from high-resolution remote sensing images is a hot research topic in the field of photogrammetry and remote sensing. However, the diversity and complexity of buildings mean that building extraction methods still face challenges in terms of accuracy, efficiency, and so on. In this study, a new building extraction framework based on MBI and combined with image segmentation techniques, spectral constraints, shadow constraints, and shape constraints is proposed. To verify the proposed method, WorldView-2, GF-2, and GF-1 remote sensing images covering Xiamen Software Park were used for building extraction experiments. Experimental results indicate that the proposed method improves on the original MBI significantly, and the correct rate is over 86%. Furthermore, the proposed framework reduces false alarms by 42% on average compared to the performance of the original MBI.
Improving mixing efficiency of a polymer micromixer by use of a plastic shim divider
NASA Astrophysics Data System (ADS)
Li, Lei; Lee, L. James; Castro, Jose M.; Yi, Allen Y.
2010-03-01
In this paper, a critical modification to a polymer-based affordable split-and-recombination static micromixer is described. To evaluate the improvement, both the original and the modified design were carefully investigated using an experimental setup and a numerical modeling approach. The structure of the micromixer was designed to take advantage of the process capabilities of both ultraprecision micromachining and the microinjection molding process. Specifically, the original and the modified design were numerically simulated using the commercial finite element method software ANSYS CFX to assist in redesigning the micromixers. The simulation results show that both designs are capable of performing mixing, while the modified design has much improved performance. Mixing experiments with two different fluids, carried out using the original and the modified mixers, again showed significantly improved mixing uniformity for the latter. The measured mixing coefficient for the original design was 0.11, and for the improved design it was 0.065. The developed manufacturing process, based on ultraprecision machining and microinjection molding for device fabrication, has the advantages of high dimensional precision, low cost and manufacturing flexibility.
Lichtenhan, JT; Hartsock, J; Dornhoffer, JR; Donovan, KM; Salt, AN
2016-01-01
Background: Administering pharmaceuticals to the scala tympani of the inner ear is a common approach to study cochlear physiology and mechanics. We present here a novel method for in vivo drug delivery in a controlled manner to sealed ears. New method: Injections of ototoxic solutions were applied from a pipette sealed into a fenestra in the cochlear apex, progressively driving solutions along the length of scala tympani toward the cochlear aqueduct at the base. Drugs can be delivered rapidly or slowly. In this report we focus on slow delivery, in which the injection rate is automatically adjusted to account for the varying cross-sectional area of the scala tympani, therefore driving the solution front at a uniform rate. Results: Objective measurements originating from finely spaced, low- to high-characteristic-frequency cochlear places were sequentially affected. Comparison with existing method(s): Controlled administration of pharmaceuticals into the cochlear apex overcomes a number of serious limitations of previously established methods such as cochlear perfusion with an injection pipette in the cochlear base: the drug concentration achieved is more precisely controlled, drug concentrations remain in scala tympani and are not rapidly washed out by cerebrospinal fluid flow, and the entire length of the cochlear spiral can be treated quickly or slowly with time. Conclusions: Controlled administration of solutions into the cochlear apex can be a powerful approach to sequentially affect objective measurements originating from finely spaced cochlear regions and allows, for the first time, the spatial origin of CAPs to be objectively defined. PMID:27506463
Using false colors to protect visual privacy of sensitive content
NASA Astrophysics Data System (ADS)
Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj
2015-03-01
Many privacy protection tools have been proposed for preserving privacy, but tools for protecting visual privacy available today lack either all or some of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on one of which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
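A minimal sketch of palette-based protection: a compressive point transform followed by a color look-up table, then interpolation with the original to adjust strength. The gamma value and matplotlib palette are illustrative stand-ins; the paper's DEF and RBS color scales are not reproduced here.

```python
import numpy as np
from matplotlib import colormaps

def false_color(gray, alpha=1.0, gamma=0.5, palette="jet"):
    """Protect an 8-bit grayscale image: compressive point transform (gamma),
    color LUT, then an adjustable mix with the original."""
    g = (gray.astype(float) / 255.0) ** gamma          # compressive transform
    protected = colormaps[palette](g)[..., :3]         # LUT -> RGB in [0, 1]
    original = np.repeat(gray[..., None], 3, axis=2) / 255.0
    return (1 - alpha) * original + alpha * protected  # protection strength

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
out = false_color(img, alpha=0.8)
```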
Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I
2015-11-03
We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression of each protein is based on a standardized Z-statistic derived from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known Empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with a computational software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method, originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
Origin of spin reorientation transitions in antiferromagnetic MnPt-based alloys
NASA Astrophysics Data System (ADS)
Chang, P.-H.; Zhuravlev, I. A.; Belashchenko, K. D.
2018-04-01
Antiferromagnetic MnPt exhibits a spin reorientation transition (SRT) as a function of temperature, and off-stoichiometric Mn-Pt alloys also display SRTs as a function of concentration. The magnetocrystalline anisotropy in these alloys is studied using first-principles calculations based on the coherent potential approximation and the disordered local moment method. The anisotropy is fairly small and sensitive to variations in composition and temperature due to the cancellation of large contributions from different parts of the Brillouin zone. Concentration- and temperature-driven SRTs are found, in reasonable agreement with experimental data. Contributions from specific band-structure features are identified and used to explain the origin of the SRTs.
NASA Astrophysics Data System (ADS)
Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong
2017-09-01
Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the original noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The noisy IMFs are then transformed into the CCT domain. By choosing different thresholds based on the noise level of each IMF profile, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
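A minimal sketch of this flow is given below, assuming the PyEMD package for the CEEMD step; since the complex curvelet transform is not available in common Python libraries, a wavelet threshold (pywt) stands in for the CCT thresholding purely for illustration, and the universal-threshold rule is an assumption.

    import numpy as np
    import pywt
    from PyEMD import CEEMDAN   # assumes the EMD-signal package is installed

    def denoise_trace(trace, wavelet="db4"):
        imfs = CEEMDAN().ceemdan(trace)        # decompose into IMF profiles
        clean = np.zeros_like(trace)
        for imf in imfs:
            coeffs = pywt.wavedec(imf, wavelet)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(imf)))      # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                    for c in coeffs[1:]]
            clean += pywt.waverec(coeffs, wavelet)[: len(imf)]
        return clean + (trace - imfs.sum(axis=0))   # add back the CEEMD residue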
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FLO30 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing-edge closure model are proposed for further study.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chightai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Agarwal, Prachi; Kuriakose, Jean W.; Kazerooni, Ella A.
2013-03-01
Automatic tracking and segmentation of the coronary arterial tree is the basic step in computer-aided analysis of coronary disease. The goal of this study is to develop an automated method to identify the origins of the left coronary artery (LCA) and right coronary artery (RCA) as the seed points for tracking of the coronary arterial trees. The heart region and the contrast-filled structures in the heart region are first extracted using morphological operations and EM estimation. To identify the ascending aorta, we developed a new multiscale aorta search (MAS) method in which the aorta is identified based on a priori knowledge of its circular shape. Because the shape of the ascending aorta in the cCTA axial view is roughly a circle but its size can vary over a wide range for different patients, multiscale circular-shape priors are used to search for the best-matching circular object in each CT slice, guided by the Hausdorff distance (HD) as the matching indicator. The location of the aorta is identified by finding the minimum HD in the heart region over the set of multiscale circular priors. An adaptive region growing method is then used to extend the initially identified aorta down to the aortic valves. The origins at the aortic sinus are finally identified by a morphological gray-level top-hat operation applied to the region-grown aorta, with a morphological structuring element designed for coronary arteries. For the 40 test cases, the aorta was correctly identified in 38 cases (95%). The aorta could be grown to the aortic root in 36 cases, and 36 LCA origins and 34 RCA origins were identified within 10 mm of the locations marked by radiologists.
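The multiscale circular-prior search can be pictured with the short sketch below; the candidate centers, radii and edge extraction are simplified assumptions, and the paper's EM estimation and morphological pre-processing are omitted.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def best_circle(edge_xy, centers, radii, n=64):
        """Find the (center, radius) whose sampled circle best matches the
        edge points edge_xy, using the Hausdorff distance as the indicator."""
        angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
        best = (None, None, np.inf)
        for cx, cy in centers:
            for r in radii:                        # multiscale circular priors
                circle = np.column_stack((cx + r * np.cos(angles),
                                          cy + r * np.sin(angles)))
                hd = max(directed_hausdorff(circle, edge_xy)[0],
                         directed_hausdorff(edge_xy, circle)[0])
                if hd < best[2]:
                    best = ((cx, cy), r, hd)
        return best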
Tămăşan, M; Ozyegin, L S; Oktar, F N; Simon, V
2013-07-01
The study reports the preparation and characterization of powders consisting of different calcium phosphate phases obtained from naturally derived raw materials of sea-shell origin reacted with H3PO4. Species of sea origin, such as corals and nacres, have attracted special interest in the bone tissue engineering area. Nacre shells are built up of calcium carbonate in aragonite form crystallized in an organic matrix. In this work, two natural materials of marine origin (shells of the echinoderm Sputnik sea urchin, Phyllacanthus imperialis, and of the mollusk Trochidae Infundibulum concavus) were used to develop powders of calcium phosphate-based biomaterials (as raw materials for bone scaffolds) by hotplate and ultrasound methods. Thermal analyses of the as-prepared materials were performed to assess their thermal behavior and select heat treatment temperatures. Samples from both sea shells, each prepared by the above-mentioned methods, were subjected to thermal treatments at 450 °C and 850 °C in order to evaluate the crystalline transformations of the calcium phosphate structures during heating. Various calcium phosphate phases were identified by X-ray diffraction analyses. In samples originating from Sputnik sea urchins, brushite was found to be predominant with calcite as a small secondary phase, while in Trochidae I. concavus samples mainly monetite and HA phases were identified. According to the XRD patterns, thermal treatment at 850 °C resulted in flat-plate whitlockite crystals, β-MgTCP [(Ca, Mg)3(PO4)2], for both samples regardless of the preparation method (ultrasound or hotplate) or the targeted Ca/P molar ratio. Scanning electron microscopy and Fourier transform infrared spectroscopy were also used to characterize these materials, and the results of these methods correlated well. Copyright © 2013 Elsevier B.V. All rights reserved.
Chemical and Genetic Discrimination of Cistanches Herba Based on UPLC-QTOF/MS and DNA Barcoding
Zheng, Sihao; Jiang, Xue; Wu, Labin; Wang, Zenghui; Huang, Linfang
2014-01-01
Cistanches Herba (Rou Cong Rong), known as “Ginseng of the desert”, has a striking curative effect on strength and nourishment, especially in kidney reinforcement to strengthen yang. However, the two plant origins of Cistanches Herba, Cistanche deserticola and Cistanche tubulosa, vary in terms of pharmacological action and chemical components. To discriminate the plant origin of Cistanches Herba, a combined chemical and genetic method system, UPLC-QTOF/MS technology and DNA barcoding, was employed for the first time in this study. The results indicated that three potential marker compounds (an isomer of campneoside II, cistanoside C, and cistanoside A) discriminate the two origins by PCA and OPLS-DA analyses. DNA barcoding enabled the two origins to be differentiated accurately; the NJ tree showed that the two origins clustered into two clades. Our findings demonstrate that the two origins of Cistanches Herba possess different chemical compositions and genetic variation. This is the first reported evaluation of the two origins of Cistanches Herba, and the finding will facilitate quality control and its clinical application. PMID:24854031
Hylemetry versus Biometry: a new method to certificate the lithography authenticity
NASA Astrophysics Data System (ADS)
Schirripa Spagnolo, Giuseppe; Cozzella, Lorenzo; Simonetti, Carla
2011-06-01
When we buy an artwork, a certificate of authenticity contains specific details about the artwork. Unfortunately, these certificates are often exchanged between similar artworks: the same document is supplied by the seller to certify originality, so the buyer ends up with a copy of an original certificate attesting that a non-original artwork is an original one. A solution to this problem would be a system that links the certificate to a specific artwork. To do this it is necessary to find characteristics of a single artwork that are unique, unrepeatable, and unchangeable. In this paper we propose a new lithography certification scheme based on the distribution of the color spots that compose the lithography itself. Thanks to the high-resolution acquisition media available today, it is possible to use analysis methods typical of speckle metrology. In particular, in the verification phase it is only necessary to acquire the same portion of the lithography, extract the verification information, use the private key to obtain the corresponding information from the certificate, and compare the two using a comparison threshold. To handle possible rotation and translation, image correlation techniques used in speckle metrology are applied to estimate and correct the rotation and translation errors, allowing the extracted and acquired images to be compared under the best conditions and granting correct originality verification.
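The registration step lends itself to a compact sketch: phase correlation, a standard tool in speckle metrology, recovers the translation between the stored and freshly acquired patch before the spot patterns are compared. Rotation handling (for example via a log-polar transform) is omitted here, and the correlation threshold is left to the verifier; this is an illustrative sketch, not the authors' code.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def align_and_score(reference, acquired):
        offset, error, _ = phase_cross_correlation(reference, acquired)
        realigned = nd_shift(acquired, offset)    # undo the estimated shift
        # correlation coefficient, to be compared against a threshold
        score = np.corrcoef(reference.ravel(), realigned.ravel())[0, 1]
        return realigned, score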
TH-AB-BRA-04: Dosimetric Evaluation of MR-Guided HDR Brachytherapy Planning for Cervical Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamio, Y; Barkati, M; Beliveau-Nadeau, D
2016-06-15
Purpose: To perform a retrospective study on 16 patients who had both CT and T2-weighted MR scans done at first fraction using the Utrecht CT/MR applicator (Elekta Brachytherapy) in order to evaluate uncertainties associated with an MR-only planning workflow. Methods: MR-workflow uncertainties were classified in three categories: reconstruction, registration and contouring. A systematic comparison of the CT and MR contouring, manual reconstruction and optimization process was performed to evaluate the impact of these uncertainties on the recommended GEC ESTRO DVH parameters: D90% and V100% for HR-CTV as well as D2cc for bladder, rectum, sigmoid colon and small bowel. This comparison was done using the following four steps: 1. Catheter reconstruction done on MR images with original CT-plan contours and dwell times. 2. OAR contours adjusted on MR images with original CT-plan reconstruction and dwell times. 3. Both reconstruction and contours done on MR images with original CT-plan dwell times. 4. Entire MR-based workflow with optimized dwell times reimported into the original CT-plan. Results: The MR-based reconstruction process showed average D2cc deviations of 4.5 ± 3.0%, 1.5 ± 2.0%, 2.5 ± 2.0% and 2.0 ± 1.0% for the bladder, rectum, sigmoid colon and small bowel, respectively, with maxima of 10%, 6%, 6% and 4%. The HR-CTV’s D90% and V100% average deviations were found to be 4.0 ± 3.0% and 2.0 ± 2.0%, respectively, with maxima of 10% and 6%. Adjusting contours on MR images was found to have a similar impact. Finally, the optimized MR-based workflow dwell times were found to still give acceptable plans when re-imported into the original CT-plan, which validated the entire workflow. Conclusion: This work illustrates a systematic validation method for centers wanting to move towards an MR-only workflow. This work will be expanded to model-based reconstruction, PD-weighted images and other types of applicators.
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we propose to automatically learn an adaptive distance metric by a graph-based semi-supervised learning model for DTI segmentation. An original discriminative distance vector is first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and the labels of all voxels are then simultaneously optimized in a graph-based semi-supervised learning approach. Finally, the optimization task is efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric can be obtained for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph-based semi-supervised learning framework. PMID:24651858
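A much-reduced sketch of the idea, under simplifying assumptions, is shown below: geometry and orientation distances from the diffusion tensors are fused into one feature vector and a few labels are propagated over the resulting graph. The paper optimizes the kernel metric jointly with the labels; here the weight w and the RBF kernel are fixed for illustration.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    def tensor_features(tensors):
        """tensors: (N, 3, 3) array of diffusion tensors."""
        evals, evecs = np.linalg.eigh(tensors)
        geometry = evals                          # eigenvalues: size and shape
        orientation = np.abs(evecs[:, :, -1])     # principal direction (sign-free)
        return geometry, orientation

    def segment(tensors, labels, w=0.5, gamma=1.0):
        geom, orient = tensor_features(tensors)
        feats = np.hstack([w * geom, (1.0 - w) * orient])
        model = LabelSpreading(kernel="rbf", gamma=gamma)
        model.fit(feats, labels)                  # labels: -1 marks unlabeled voxels
        return model.transduction_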
NASA Astrophysics Data System (ADS)
Dmitrieva, S. O.; Frontasyeva, M. V.; Dmitriev, A. A.; Dmitriev, A. Yu.
2017-01-01
This work is dedicated to determining the origin of archaeological finds of medieval glass using neutron activation analysis (NAA). Among such objects one can find not only items produced in ancient Russian glassmaking workshops but also items imported from Byzantium. The authors substantiate the ancient Russian origin of medieval glass bracelets of the pre-Mongol period found at the ancient Dubna settlement. The conclusions are based on data on the chemical composition of the glass obtained by NAA of 10 fragments of bracelets at the IBR-2 reactor (Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research).
Dynamic baseline detection method for power data network service
NASA Astrophysics Data System (ADS)
Chen, Wei
2017-08-01
This paper proposes a dynamic baseline traffic detection method based on historical traffic data for power data networks. The method uses Cisco's NetFlow acquisition tool to collect the original historical traffic data from network elements at fixed intervals, working with three dimensions of information: communication port, time, and traffic volume (number of bytes or number of packets). By filtering the data, removing outliers, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
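A pandas sketch of the baseline logic follows; the column names, hourly bucketing and the 3-sigma band are assumptions made for illustration, not details given in the paper.

    import pandas as pd

    def detect_abnormal(history: pd.DataFrame, current: pd.DataFrame, k: float = 3.0):
        """history/current columns: port, timestamp (datetime), bytes."""
        history = history.assign(slot=history["timestamp"].dt.hour)
        base = history.groupby(["port", "slot"])["bytes"].agg(["median", "std"])
        cur = current.assign(slot=current["timestamp"].dt.hour)
        cur = cur.join(base, on=["port", "slot"])
        # compare the actual value with the dynamic baseline band
        cur["abnormal"] = (cur["bytes"] - cur["median"]).abs() > k * cur["std"]
        return cur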
Update of the Accounting Surface Along the Lower Colorado River
Wiele, Stephen M.; Leake, Stanley A.; Owen-Joyce, Sandra J.; McGuire, Emmet H.
2008-01-01
The accounting-surface method was developed in the 1990s by the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, to identify wells outside the flood plain of the lower Colorado River that yield water that will be replaced by water from the river. This method was needed to identify which wells require an entitlement for diversion of water from the Colorado River and need to be included in accounting for consumptive use of Colorado River water as outlined in the Consolidated Decree of the United States Supreme Court in Arizona v. California. The method is based on the concept of a river aquifer and an accounting surface within the river aquifer. The study area includes the valley adjacent to the lower Colorado River and parts of some adjacent valleys in Arizona, California, Nevada, and Utah and extends from the east end of Lake Mead south to the southerly international boundary with Mexico. Contours for the original accounting surface were hand drawn based on the shape of the aquifer, water-surface elevations in the Colorado River and drainage ditches, and hydrologic judgment. This report documents an update of the original accounting surface based on updated water-surface elevations in the Colorado River and drainage ditches and the use of simple, physically based ground-water flow models to calculate the accounting surface in four areas adjacent to the free-flowing river.
A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis
Kang, Mengjun
2015-01-01
A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
MWM-Array Characterization of Mechanical Damage and Corrosion
DOT National Transportation Integrated Search
2011-02-09
The MWM-Array is an inductive sensor that operates like a transformer in a plane. The MWM-Array is based on the original MWM(R) (Meandering Winding Magnetometer) developed at MIT in the 1980s. A rapid multivariate inverse method converts impedance dat...
ERIC Educational Resources Information Center
Adamson, Charles
A two-semester English-for-Special-Purposes (ESP) course designed for nursing students at a Japanese university is described, including the origins and development of the course, text development, and teaching methods. The content-based course was designed to meet licensure requirements for English language training, emphasized listening and…
Gu, Xuan; Zhang, Xiao-qin; Song, Xiao-na; Zang, Yi-mei; Li Yan-peng; Ma, Chang-hua; Zhao, Bai-xiao; Liu, Chun-sheng
2014-12-01
The fruit of Lycium ruthenicum is a common folk medicine in China, now popular for its antioxidative effect and other medicinal functions. Adulterants of the herb confuse consumers. In order to identify a new adulterant of L. ruthenicum, research was performed based on ITS sequences from the NCBI Nucleotide Database, combined with analysis of the origin and morphology of the adulterant to trace its variety. Total genomic DNA was isolated from the materials, and nuclear DNA ITS sequences were amplified and sequenced; DNA fragments were collated and matched using ContigExpress, and similarity identification by BLAST analysis was performed. In addition, the distribution of the plant origin and the morphology were considered for further identification and verification. Family and genus were identified by the molecular identification method: the adulterant was identified as a plant belonging to Berberis. Origin analysis narrowed the range of sample identification, leaving seven different Berberis species as potential sources of the sample. The adulterant variety was then traced by morphological analysis. The combined molecular identification-origin-morphology approach proves to be a promising way to achieve traceability of medicinal herbs, with time-saving and economic advantages, and the results showed that the new adulterant of L. ruthenicum was B. kaschgarica. The main differences between B. kaschgarica and L. ruthenicum are as follows. In terms of traits, the surface of B. kaschgarica is smooth and crispy, whereas that of L. ruthenicum is shrunken, solid and hard. In microscopic characteristics, the epicarp cells of B. kaschgarica thicken like a string of beads and the stone cells are rectangular, whereas the stone cell walls of L. ruthenicum are wavy with an obvious grain layer. In molecular sequences, the length of the ITS sequence of B. kaschgarica is 606 bp and that of L. ruthenicum is 654 bp; the similarity of the two sequences is 53.32%.
NASA Astrophysics Data System (ADS)
Pratama, S. P.; Yunus, A.; Purwanto, E.; Widyastuti, Y.
2018-03-01
Graptophyllum pictum is a medicinal plant whose chemical content is important for treating diseases. Its leaves, bark and flowers can be used to facilitate menstruation and to treat hemorrhoids, constipation, ulcers, swelling, and earache. G. pictum is difficult to propagate by seed due to the long duration of seed formation, so vegetative propagation is done by stem cutting. The aim of this study was to obtain the optimum combination of cutting origin and organic plant growth regulator concentration for the growth of Daun Ungu propagated by the stem cutting method. The research was conducted at the Research Center for Medicinal Plants and Traditional Drugs, Tanjungsari, Tegal Gede, Karanganyar, from June to August 2016. Origin of cuttings and organic plant growth regulator concentration were used as treatment factors. A completely randomized design (RAL) was used, and data were analyzed by F test (ANOVA) with a confidence level of 95%; significant differences among treatments were followed up with Duncan's test at α = 5%. The research indicates that the longest roots resulted from the treatment with 0.5 ml/l of organic plant growth regulator. The 1 ml/l treatment increased the fresh and dry weight of roots, and the 1.5 ml/l treatment increased the percentage of growing shoots. Using the base part of the stem as the origin of cuttings increased the length, fresh weight and dry weight of shoots and increased the number of leaves. The interaction between the 1 ml/l concentration of organic plant growth regulator and the central part of the stem as the origin of cuttings increased the leaf area, whereas the treatment without organic plant growth regulator using the base part as planting material gave the smallest leaf area.
Takalo, Jouni; Timonen, Jussi; Sampo, Jouni; Rantala, Maaria; Siltanen, Samuli; Lassas, Matti
2014-11-01
A novel method is presented for distinguishing postal stamp forgeries and counterfeit banknotes from genuine samples. The method is based on analyzing differences in paper fibre networks. The main tool is a curvelet-based algorithm for measuring the overall fibre orientation distribution and quantifying anisotropy. Using a couple of appropriately chosen parameters makes it possible to distinguish forgeries from genuine originals as concentrated point clouds in a two- or three-dimensional parameter space. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
De Bernardis, E.; Farassat, F.
1989-01-01
Using a time domain method based on the Ffowcs Williams-Hawkings equation, a reliable explanation is provided for the origin of singularities observed in the numerical prediction of supersonic propeller noise. In the last few years Tam and, more recently, Amiet have analyzed the phenomenon from different points of view. The method proposed here offers a clear interpretation of the singularities based on a new description of the sources, relating them to the behavior of lines where the propeller blade surface exhibits a slope discontinuity.
Using Grain-Size Distribution Methods for Estimation of Air Permeability.
Wang, Tiejun; Huang, Yuanyang; Chen, Xunhong; Chen, Xi
2016-01-01
Knowledge of air permeability (ka) at dry conditions is critical for the use of air flow models in porous media; however, it is usually difficult and time consuming to measure ka at dry conditions. It is thus desirable to estimate ka at dry conditions from other readily obtainable properties. In this study, the feasibility of using information derived from grain-size distributions (GSDs) for estimating ka at dry conditions was examined. Fourteen GSD-based equations originally developed for estimating saturated hydraulic conductivity were tested using ka measured at dry conditions in both undisturbed and disturbed river sediment samples. On average, the estimated ka from all the equations, except for the method of Slichter, differed by less than ± 4 times from the measured ka for both undisturbed and disturbed groups. In particular, for the two sediment groups, the results given by the methods of Terzaghi and Hazen-modified were comparable to the measured ka. In addition, two methods (Barr and Beyer) for the undisturbed samples and one method (Hazen-original) for the disturbed samples were also able to produce comparable ka estimates. Moreover, after adjusting the values of the coefficient C in the GSD-based equations, the estimation of ka was significantly improved, with the differences between the measured and estimated ka less than ±4% on average (except for the method of Barr). As demonstrated by this study, GSD-based equations may provide a promising and efficient way to estimate ka at dry conditions. © 2015, National Ground Water Association.
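For orientation, the Hazen method named above reduces to a one-line formula; the coefficient C below is a textbook default, not one of the coefficients calibrated in this study.

    def hazen_conductivity(d10_mm: float, C: float = 100.0) -> float:
        """Hazen estimate K = C * d10**2, with d10 in cm and K in cm/s."""
        d10_cm = d10_mm / 10.0
        return C * d10_cm ** 2

    # Example: medium sand with d10 = 0.2 mm -> K = 100 * 0.02**2 = 0.04 cm/s
    print(hazen_conductivity(0.2))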
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.
Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification
NASA Astrophysics Data System (ADS)
Fallahpour, Mehdi; Megías, David
This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on the comparison between the original and the MP3 compressed/decompressed signal, and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps) without significant perceptual distortion (ODG about -0.25) and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
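The magnitude-modification idea can be sketched as below. The paper's band selection (driven by the MP3 comparison) and its scaling rule are its contribution and are not reproduced here; instead, quantization of magnitudes to even/odd levels stands in as a simple bit-exact embedding, with the band start and step size as assumed parameters.

    import numpy as np

    def embed(frame, bits, band_start=2000, step=0.05):
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        for i, b in enumerate(bits):
            k = band_start + i
            q = np.floor(mag[k] / step)
            q = q + (q % 2 != b)          # force quantizer parity to the bit
            mag[k] = (q + 0.5) * step     # mid-rise level, bit-exact to extract
        return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))

    def extract(frame, n_bits, band_start=2000, step=0.05):
        mag = np.abs(np.fft.rfft(frame))
        return [int(np.floor(mag[band_start + i] / step) % 2)
                for i in range(n_bits)]

The frame must be long enough that the chosen FFT bins exist (for example 8192 samples for bins around 2000).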
ERIC Educational Resources Information Center
Henning, Margaret; Chi, Chunheui; Khanna, Sunil K.
2011-01-01
Objective: The purpose of this study was to evaluate the socio-cultural variables that may influence teachers' adoption of classroom-based HIV/AIDS education within the school setting and among school types in Zambia's Lusaka Province. Method: Mixed methods were used to collect original data. Using semi-structured interviews (n=11) and a survey…
The numerical modelling of MHD astrophysical flows with chemistry
NASA Astrophysics Data System (ADS)
Kulikov, I.; Chernykh, I.; Protasov, V.
2017-10-01
A new code for numerical simulation of magnetohydrodynamical astrophysical flows with chemical reactions is presented in this paper. At the heart of the code is a new, original low-dissipation numerical method based on a combination of an operator splitting approach and a piecewise-parabolic method on a local stencil. The chemodynamics of hydrogen during the turbulent formation of molecular clouds is modeled.
Yamada, Toru; Umeyama, Shinji; Matsuda, Keiji
2012-01-01
In conventional functional near-infrared spectroscopy (fNIRS), systemic physiological fluctuations evoked by body motion and psychophysiological changes often contaminate fNIRS signals. We propose a novel method for separating functional and systemic signals based on their hemodynamic differences. Considering their physiological origins, we assume a negative linear relationship between the oxy- and deoxyhemoglobin changes of functional signals and a positive one for systemic signals, with coefficients determined by an empirical procedure. The proposed method was compared to conventional and multi-distance NIRS. The results were as follows: (1) Nonfunctional tasks evoked substantial oxyhemoglobin changes, and comparatively smaller deoxyhemoglobin changes in the same direction, in conventional NIRS; the systemic components estimated by the proposed method behaved similarly, while the estimated functional components were very small. (2) During finger-tapping tasks, laterality in the functional component was more distinctive with the proposed method than with conventional fNIRS, and the systemic component indicated task-evoked changes regardless of the finger used to perform the task. (3) For all tasks, the functional components were highly coincident with signals estimated by multi-distance NIRS. These results strongly suggest that the functional component obtained by the proposed method originates in the cerebral cortical layer. We believe that the proposed method could improve the reliability of fNIRS measurements without any modification of commercially available instruments. PMID:23185590
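The separation model implied above admits a compact sketch: oxy- and deoxyhemoglobin changes are written as a functional component with a negative oxy/deoxy ratio plus a systemic component with a positive one, and the 2x2 system is solved per time point. The coefficient values below are assumed placeholders; the paper determines them empirically.

    import numpy as np

    def separate(oxy, deoxy, k_f=-0.6, k_s=0.6):
        """oxy, deoxy: 1-D arrays of hemoglobin changes over time.
        Solves oxy = f + s and deoxy = k_f*f + k_s*s for each sample."""
        A = np.array([[1.0, 1.0], [k_f, k_s]])
        f, s = np.linalg.solve(A, np.vstack([oxy, deoxy]))
        return f, s   # functional and systemic oxyhemoglobin components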
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, due to scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin was reduced from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method using a tape measure over large distances was also tested; it resulted in scaling compensation similar to the surveying method and to direct wand size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Arcila, Dahiana; Alexander Pyron, R; Tyler, James C; Ortí, Guillermo; Betancur-R, Ricardo
2015-01-01
Time-calibrated phylogenies based on molecular data provide a framework for comparative studies. Calibration methods to combine fossil information with molecular phylogenies are, however, under active development, often generating disagreement about the best way to incorporate paleontological data into these analyses. This study provides an empirical comparison of the most widely used approach based on node-dating priors for relaxed clocks implemented in the programs BEAST and MrBayes, with two recently proposed improvements: one using a new fossilized birth-death process model for node dating (implemented in the program DPPDiv), and the other using a total-evidence or tip-dating method (implemented in MrBayes and BEAST). These methods are applied herein to tetraodontiform fishes, a diverse group of living and extinct taxa that features one of the most extensive fossil records among teleosts. Previous estimates of time-calibrated phylogenies of tetraodontiforms using node-dating methods reported disparate estimates for their age of origin, ranging from the late Jurassic to the early Paleocene (ca. 150-59 Ma). We analyzed a comprehensive dataset with 16 loci and 210 morphological characters, including 131 taxa (95 extant and 36 fossil species) representing all families of fossil and extant tetraodontiforms, under different molecular clock calibration approaches. Results from node-dating methods produced consistently younger ages than the tip-dating approaches. The older ages inferred by tip dating imply an unlikely early-late Jurassic (ca. 185-119 Ma) origin for this order and the existence of extended ghost lineages in their fossil record. Node-based methods, by contrast, produce time estimates that are more consistent with the stratigraphic record, suggesting a late Cretaceous (ca. 86-96 Ma) origin. We show that the precision of clade age estimates using tip dating increases with the number of fossils analyzed and with the proximity of fossil taxa to the node under assessment. This study suggests that current implementations of tip dating may overestimate ages of divergence in calibrated phylogenies. It also provides a comprehensive phylogenetic framework for tetraodontiform systematics and future comparative studies. Copyright © 2014 Elsevier Inc. All rights reserved.
Revised Methods for Characterizing Stream Habitat in the National Water-Quality Assessment Program
Fitzpatrick, Faith A.; Waite, Ian R.; D'Arconte, Patricia J.; Meador, Michael R.; Maupin, Molly A.; Gurtz, Martin E.
1998-01-01
Stream habitat is characterized in the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program as part of an integrated physical, chemical, and biological assessment of the Nation's water quality. The goal of stream habitat characterization is to relate habitat to other physical, chemical, and biological factors that describe water-quality conditions. To accomplish this goal, environmental settings are described at sites selected for water-quality assessment. In addition, spatial and temporal patterns in habitat are examined at local, regional, and national scales. This habitat protocol contains updated methods for evaluating habitat in NAWQA Study Units. Revisions are based on lessons learned after 6 years of applying the original NAWQA habitat protocol to NAWQA Study Unit ecological surveys. Similar to the original protocol, these revised methods for evaluating stream habitat are based on a spatially hierarchical framework that incorporates habitat data at basin, segment, reach, and microhabitat scales. This framework provides a basis for national consistency in collection techniques while allowing flexibility in habitat assessment within individual Study Units. Procedures are described for collecting habitat data at basin and segment scales; these procedures include use of geographic information system data bases, topographic maps, and aerial photographs. Data collected at the reach scale include channel, bank, and riparian characteristics.
Reassessment of roles of oxygen and ultraviolet light in Precambrian evolution
NASA Technical Reports Server (NTRS)
Margulis, L.; Rambler, M.; Walker, J. C. G.
1976-01-01
It is argued that the transition to an oxidizing atmosphere preceded the origin of eukaryotic cells, which in turn must have preceded the origin of metazoa. Moreover, the number of methods by which organisms can protect themselves from harmful UV radiation is sufficiently large to suggest that solar UV, even when the atmosphere was anaerobic, was not such as to control the distribution and diversification of life. An alternative explanation for the late and sudden appearance of metazoa in lower Cambrian sediments is proposed, which is related to the mechanisms by which fully mature eukaryotic cells probably originated. There was probably a protracted evolution of modern genetic systems based on mitosis in cells which acquired organelles (e.g., plastids and mitochondria) by hereditary endosymbiosis. The origin of hard parts underlies the Cambrian explosion of metazoans.
An IR-Based Approach Utilizing Query Expansion for Plagiarism Detection in MEDLINE.
Nawab, Rao Muhammad Adeel; Stevenson, Mark; Clough, Paul
2017-01-01
The identification of duplicated and plagiarized passages of text has become an increasingly active area of research. In this paper, we investigate methods for plagiarism detection that aim to identify potential sources of plagiarism from MEDLINE, particularly when the original text has been modified through the replacement of words or phrases. A scalable approach based on Information Retrieval is used to perform candidate document selection, the identification of a subset of potential source documents given a suspicious text, from MEDLINE. Query expansion is performed using the UMLS Metathesaurus to deal with situations in which original documents are obfuscated. Various approaches to Word Sense Disambiguation are investigated to deal with cases where there are multiple Concept Unique Identifiers (CUIs) for a given term. Results using the proposed IR-based approach outperform a state-of-the-art baseline based on Kullback-Leibler Distance.
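The candidate-selection pass can be illustrated as below, with the rank_bm25 package standing in for the paper's IR engine and a plain dictionary standing in for the UMLS Metathesaurus lookup; both substitutions are assumptions for the sketch.

    from rank_bm25 import BM25Okapi

    def candidate_documents(suspicious_text, documents, synonyms, top_k=10):
        corpus = [doc.lower().split() for doc in documents]
        bm25 = BM25Okapi(corpus)
        query = suspicious_text.lower().split()
        # query expansion: add synonyms of every term to catch obfuscation
        expanded = query + [s for t in query for s in synonyms.get(t, [])]
        scores = bm25.get_scores(expanded)
        return sorted(range(len(documents)), key=lambda i: -scores[i])[:top_k]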
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-01-01
Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037
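A single-level, much-simplified sketch of the wavelet mechanics behind this kind of correction is shown below, assuming pywt and co-registered images of equal size; the local agreement mask is an illustrative stand-in for the paper's 3D local correlation model.

    import numpy as np
    import pywt

    def local_detail_transfer(pet, anat, wavelet="db2", thresh=0.3):
        pa, (ph, pv, pd) = pywt.dwt2(pet, wavelet)
        aa, (ah, av, ad) = pywt.dwt2(anat, wavelet)
        fused = []
        for p_det, a_det in ((ph, ah), (pv, av), (pd, ad)):
            # transfer anatomical detail only where it agrees locally with PET
            mask = (np.sign(p_det) == np.sign(a_det)) & (np.abs(a_det) > thresh)
            fused.append(np.where(mask, a_det, p_det))
        return pywt.idwt2((pa, tuple(fused)), wavelet)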
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
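The taxonomy maps directly onto code, which makes the distinctions concrete; the sketch below (difference-in-means statistic, arbitrary simulated data) shows the with-replacement whole-sample bootstrap, the without-replacement leave-one-out jackknife, and the label-permuting randomization test side by side.

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30)
    stat = lambda a, b: a.mean() - b.mean()

    # bootstrap: resample n items WITH replacement, whole sample replaced
    boot = [stat(rng.choice(x, x.size, replace=True),
                 rng.choice(y, y.size, replace=True)) for _ in range(2000)]

    # jackknife: n-1 items WITHOUT replacement (leave-one-out subsets)
    jack = [stat(np.delete(x, i), y) for i in range(x.size)]

    # randomization test: permute group labels under the null hypothesis
    pooled = np.concatenate([x, y])
    perm = [stat(p[:x.size], p[x.size:])
            for p in (rng.permutation(pooled) for _ in range(2000))]
    p_value = np.mean(np.abs(perm) >= abs(stat(x, y)))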
Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion-cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern-adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods proposed in this paper are applicable to other pattern-based geostatistical simulation methods.
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a trending, promising method in the medical image analysis area, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With the 5-fold cross validation evaluation method, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823+/-0.0177, compared to an AUC of 0.8484+/-0.0204 generated by the original ROI. By averaging the ROI scores from one nodule, the lesion-based AUC using multichannel ROI II was 0.8793+/-0.0210. Comparing the convolved feature maps from the CNN using different types of ROIs shows that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
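The three-channel construction reads directly as code; the sketch below assumes the nodule segmentation mask is already available and uses a Gaussian gradient magnitude for the texture channel, both stated as assumptions rather than the paper's exact operators.

    import numpy as np
    from scipy import ndimage

    def multichannel_roi(roi, nodule_mask):
        norm = lambda a: (a - a.min()) / (np.ptp(a) + 1e-8)
        shape_ch = roi * nodule_mask              # highlights the nodule shape
        texture_ch = ndimage.gaussian_gradient_magnitude(roi, sigma=1.0)
        # stack as a pseudo-color (H, W, 3) input for the CNN
        return np.stack([norm(roi), norm(shape_ch), norm(texture_ch)], axis=-1)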
Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.
Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea
2017-08-18
Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, including explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to the 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. The biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of correction for blood volume in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
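For orientation, a minimal reference-tissue Logan implementation is sketched below; the paper's contribution, the added whole-blood term, is deliberately not reproduced, and the common k2'-related term is also omitted for brevity.

    import numpy as np

    def logan_dvr(ct, cref, t, t_star=30.0):
        """ct, cref: target/reference time-activity curves; t: frame mid-times.
        DVR is the slope of the late-time linear fit of y on x."""
        int_ct = np.concatenate([[0.0], np.cumsum((ct[1:] + ct[:-1]) / 2 * np.diff(t))])
        int_cr = np.concatenate([[0.0], np.cumsum((cref[1:] + cref[:-1]) / 2 * np.diff(t))])
        y, x = int_ct / ct, int_cr / ct
        late = t >= t_star                 # keep only the linear segment
        slope, _ = np.polyfit(x[late], y[late], 1)
        return slope                       # distribution volume ratio (DVR)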
Peng, Xiang; King, Irwin
2008-01-01
The Biased Minimax Probability Machine (BMPM) constructs a classifier for imbalanced learning tasks. It provides a worst-case bound on the probability of misclassification of future data points based on reliable estimates of the means and covariance matrices of the classes from the training data samples, and achieves promising performance. In this paper, we develop a novel and critical extension of the training algorithm for BMPM that is based on Second-Order Cone Programming (SOCP), and we apply the biased classification model to medical diagnosis problems to demonstrate its usefulness. By removing some crucial assumptions in the original solution to this model, we make the new method more accurate and robust. We outline the theoretical derivation of the biased classification model and reformulate it into an SOCP problem which can be efficiently solved with a global optimum guarantee. We evaluate our proposed SOCP-based BMPM (BMPMSOCP) scheme in comparison with traditional solutions on medical diagnosis tasks where the objective is to improve the sensitivity (the accuracy of the more important class, say "ill" samples) rather than the overall accuracy of the classification. Empirical results show that our method is more effective and robust in handling imbalanced classification problems than traditional classification approaches and than the original Fractional Programming-based BMPM (BMPMFP).
Ziółkowska, Angelika; Wąsowicz, Erwin; Jeleń, Henryk H
2016-12-15
Among methods to detect wine adulteration, profiling volatiles is one with great potential regarding robustness, analysis time and the abundance of information available for subsequent data treatment. Volatile fraction fingerprinting by solid-phase microextraction with direct analysis by mass spectrometry without compound separation (SPME-MS) was used for differentiation of white as well as red wines. The aim was to differentiate between the varieties used for wine production and also to differentiate wines by country of origin. The results were compared to SPME-GC/MS analysis, in which compounds were resolved by gas chromatography. For both approaches the same statistical procedure was used to compare samples: principal component analysis (PCA) followed by linear discriminant analysis (LDA). White wines (38) and red wines (41) representing different grape varieties and various regions of origin were analysed. SPME-MS proved advantageous due to better discrimination and higher sample throughput. Copyright © 2016 Elsevier Ltd. All rights reserved.
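The PCA-then-LDA procedure named above is a standard pipeline; a sketch in scikit-learn form, with the component count as an assumed parameter:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def wine_classifier(n_components=10):
        # PCA first reduces the collinear m/z intensities, then LDA separates
        return make_pipeline(StandardScaler(), PCA(n_components=n_components),
                             LinearDiscriminantAnalysis())

    # usage: scores = cross_val_score(wine_classifier(), X, y, cv=5)
    # with X holding SPME-MS fingerprints and y the variety or country labels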
Hondrogiannis, Ellen; Rotta, Kathryn; Zapf, Charles M
2013-03-01
Sixteen elements found in 37 vanilla samples from Madagascar, Uganda, India, Indonesia (all Vanilla planifolia species), and Papua New Guinea (Vanilla tahitensis species) were measured by wavelength dispersive X-ray fluorescence (WDXRF) spectroscopy for the purpose of determining the elemental concentrations needed to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were calculated based on calibration curves created using four Natl. Inst. of Standards and Technology (NIST) standards. Discriminant analysis was used to successfully classify the vanilla samples by species and geographical region. Our method allows for higher throughput in the rapid screening of vanilla samples in less time than currently available analytical methods. Wavelength dispersive X-ray fluorescence spectroscopy and discriminant function analysis were used to classify vanilla from different origins, resulting in a model that could potentially serve to rapidly validate these samples before purchase from a producer. © 2013 Institute of Food Technologists®
Life Origination Hydrate Hypothesis (LOH-Hypothesis)
Ostrovskii, Victor; Kadyshevich, Elena
2012-01-01
The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs, which are N-bases, riboses, nucleosides, nucleotides), DNA- and RNA-like molecules, amino-acids, and proto-cells repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their “thermodynamic front” guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor. PMID:25382120
Popping, Bert; De Dominicis, Emiliano; Dante, Mario; Nocetti, Marco
2017-02-16
Parmigiano Reggiano is an Italian product with a protected designation of origin (P.D.O.). It is an aged hard cheese made from raw milk. P.D.O. products are protected by European regulations. Approximately 3 million wheels are produced each year, and the product commands a significant premium price due to its quality and its typicity, well known all around the world. Because demand exceeds production, several fraudulent products can be found on the market. The rate of fraud is estimated at between 20% and 40%, the latter predominantly in the grated form. We have developed a non-targeted method based on Liquid Chromatography-High Resolution Mass Spectrometry (LC-HRMS) that allows the discrimination of Parmigiano Reggiano from non-authentic products made with milk of different geographical origins, or products where other aspects of the production process do not comply with the rules laid down in the production specifications for Parmigiano Reggiano. Based on a database created with authentic samples provided by the Consortium of Parmigiano Reggiano Cheese, a reliable classification model was built. The overall classification capability of this non-targeted method was verified on 32 grated cheese samples; the classification was 87.5% accurate.
Brinckmann, J A
2013-11-01
Pharmacopoeial monographs providing specifications for composition, identity, purity, quality, and strength of a botanical are developed based on analysis of presumably authenticated botanical reference materials. The specimens should represent the quality traditionally specified for the intended use, which may require different standards for medicinal versus food use. Development of quality standards monographs may occur through collaboration between a sponsor company or industry association and a pharmacopoeial expert committee. The sponsor may base proposed standards and methods on their own preferred botanical supply which may, or may not, be geo-authentic and/or correspond to qualities defined in traditional medicine formularies and pharmacopoeias. Geo-authentic botanicals are those with specific germplasm, cultivated or collected in their traditional production regions, of a specified biological age at maturity, with specific production techniques and processing methods. Consequences of developing new monographs that specify characteristics of an 'introduced' cultivated species or of a material obtained from one unique origin could lead to exclusion of geo-authentic herbs and may have therapeutic implications for clinical practice. In this review, specifications of selected medicinal plants with either a geo-authentic or geographical indication designation are discussed and compared against official pharmacopoeial standards for same genus and species regardless of origin. Copyright © 2012 John Wiley & Sons, Ltd.
Leiomyosarcoma: One disease or distinct biologic entities based on site of origin?
Worhunsky, David J; Gupta, Mihir; Gholami, Sepideh; Tran, Thuy B; Ganjoo, Kristen N; van de Rijn, Matt; Visser, Brendan C; Norton, Jeffrey A; Poultsides, George A
2015-06-01
Leiomyosarcoma (LMS) can originate from the retroperitoneum, uterus, extremity, and trunk. It is unclear whether tumors of different origin represent discrete entities. We compared clinicopathologic features and outcomes following surgical resection of LMS stratified by site of origin. Patients with LMS undergoing resection at a single institution were retrospectively reviewed. Clinicopathologic variables were compared across sites. Survival was calculated using the Kaplan-Meier method and compared using log-rank and Cox regression analyses. From 1983 to 2011, 138 patients underwent surgical resection for LMS. Retroperitoneal and uterine LMS were larger, higher grade, and more commonly associated with synchronous metastases. However, disease-specific survival (DSS), recurrence-free survival, and recurrence patterns were not significantly different across the four sites. Synchronous metastases (HR 3.20, P < 0.001), but not site of origin, size, grade, or margin status, were independently associated with worse DSS. A significant number of recurrences and disease-related deaths were noted beyond 5 years. Although larger and higher grade, retroperitoneal and uterine LMS share similar survival and recurrence patterns with their trunk and extremity counterparts. LMS of various anatomic sites may not represent distinct disease processes based on clinical outcomes. The presence of metastatic disease remains the most important prognostic factor for LMS. © 2015 Wiley Periodicals, Inc.
Review and evaluation of models that produce trip tables from ground counts : interim report.
DOT National Transportation Integrated Search
1996-01-01
This research effort was motivated by the desire of planning agencies to seek alternative methods of deriving current or base year Origin-Destination (O-D) trip tables without adopting conventional O-D surveys that are expensive, time-consuming and ...
The combination of scanning electron and scanning probe microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sapozhnikov, I. D.; Gorbenko, O. M., E-mail: gorolga64@gmail.com; Felshtyn, M. L.
2016-06-17
We present an SPM module that combines SEM and SPM methods for studying surfaces. The module is based on an original mechanical moving and scanning system. Examples of studies of steel surface microstructure in both SEM and SPM modes are presented.
Zhao, Yuancun; Chen, Xiaogang; Yang, Yiwen; Zhao, Xiaohong; Zhang, Shu; Gao, Zehua; Fang, Ting; Wang, Yufang; Zhang, Ji
2018-05-07
Diatom examination has long been used for the diagnosis of drowning in forensic practice. However, traditional examination of the microscopic features of diatom frustules is time-consuming and requires taxonomic expertise. In this study, we demonstrate a potential DNA-based method of inferring a suspected drowning site using pyrosequencing (PSQ) of the V7 region of 18S ribosomal DNA (18S rDNA) as a diatom DNA barcode. By employing a sparse representation-based AdvISER-M-PYRO algorithm, the original PSQ signals of diatom DNA mixtures were deciphered to determine the corresponding taxa of the composite diatoms. Additionally, we evaluated the possibility of correlating water samples to collection sites by analyzing the PSQ signal profiles of diatom mixtures contained in the water samples via multidimensional scaling. The results suggest that diatomaceous PSQ profile analysis could be used as a cost-effective method to deduce the geographical origin of an environmental bio-sample.
Kolostova, Katarina; Zhang, Yong; Hoffman, Robert M; Bobek, Vladimir
2014-09-01
In the present study, we demonstrate an animal model and a recently introduced size-based exclusion method for circulating tumor cell (CTC) isolation. The methodology enables subsequent in vitro CTC culture and characterization. The human lung cancer cell line H460, expressing red fluorescent protein (H460-RFP), was orthotopically implanted in nude mice. CTCs were isolated by a size-based filtration method and successfully cultured in vitro on the separating membrane (MetaCell®), analyzed by means of time-lapse imaging. The cultured CTCs were heterogeneous in size and morphology even though they originated from a single tumor. The outer CTC membranes generally showed blebbing. Abnormal mitosis resulting in three daughter cells was frequently observed. The expression of RFP ensured that the CTCs originated from the lung tumor. These readily isolatable, identifiable and cultivable CTCs can be used to characterize individual patient cancers and for screening of more effective treatments.
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and improve the correlation between pseudo tie lines. Thus we apply levelling to the flight-line difference data instead of to the original AEM data directly. Pseudo tie lines are selected distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of this method is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
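The core of the pipeline — difference adjacent lines, keep the correlated low-order principal components as the levelling-error estimate, then integrate back — can be sketched in a few lines of numpy. This is a minimal illustration that assumes an already-gridded survey; the function name and the choice of two retained components are illustrative, not the paper's.

```python
import numpy as np

def level_lines(data, n_err=2):
    """Sketch: estimate levelling errors from flight-line differences.

    data: 2-D array, one row per flight line, one column per pseudo
    tie line (real surveys need spatial interpolation first).
    """
    diff = np.diff(data, axis=0)              # flight-line differences
    mean = diff.mean(axis=0)
    U, s, Vt = np.linalg.svd(diff - mean, full_matrices=False)
    # low-order components capture the correlated levelling error;
    # the residual is treated as geology plus noise
    err_diff = mean + (U[:, :n_err] * s[:n_err]) @ Vt[:n_err]
    # inverse difference: integrate back to per-line errors,
    # taking the first line as the reference level
    err = np.vstack([np.zeros(data.shape[1]),
                     np.cumsum(err_diff, axis=0)])
    return data - err
```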
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
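A simplified sketch of the per-block compatibility check follows; the published method additionally searches a neighbourhood of the rounded coefficients, whereas this illustration tests only the nearest candidate.

```python
import numpy as np
from scipy.fftpack import dct, idct

def block_jpeg_compatible(block, Q):
    """Simplified compatibility test for one 8x8 block.

    block: 8x8 array of integer pixel values (0-255);
    Q: the 8x8 quantization matrix recovered from the JPEG header.
    """
    # forward 2-D DCT of the level-shifted block
    c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
    k = np.round(c / Q)                     # nearest quantized coefficients
    # simulate decompression: dequantize, inverse DCT, shift, round, clip
    rec = idct(idct(k * Q, axis=0, norm='ortho'), axis=1, norm='ortho') + 128.0
    rec = np.clip(np.round(rec), 0, 255)
    # an unmodified JPEG block reproduces itself exactly;
    # an LSB flip in the spatial domain usually breaks this
    return np.array_equal(rec, np.asarray(block, dtype=float))
```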
Amidžić Klarić, Daniela; Klarić, Ilija; Mornar, Ana; Velić, Darko; Velić, Natalija
2015-08-01
This study presents data on the content of 21 minerals and heavy metals in 15 blackberry wines made from conventionally and organically grown blackberries. The objective of this study was to classify the blackberry wine samples based on their mineral composition and the cultivation method of the starting raw material by using chemometric analysis. The metal content of the Croatian blackberry wine samples was determined by AAS after dry ashing. The comparison between the organic and conventional groups of investigated blackberry wines showed a statistically significant difference in the concentrations of Si and Li, with the organic group containing higher concentrations of these elements. According to multivariate data analysis, the model based on the original metal content data set finally included seven original variables (K, Fe, Mn, Cu, Ba, Cd and Cr) and gave a satisfactory separation of the two cultivation methods of the starting raw material.
Hoggart, Clive J; Venturini, Giulia; Mangino, Massimo; Gomez, Felicia; Ascari, Giulia; Zhao, Jing Hua; Teumer, Alexander; Winkler, Thomas W; Tšernikova, Natalia; Luan, Jian'an; Mihailov, Evelin; Ehret, Georg B; Zhang, Weihua; Lamparter, David; Esko, Tõnu; Macé, Aurelien; Rüeger, Sina; Bochud, Pierre-Yves; Barcella, Matteo; Dauvilliers, Yves; Benyamin, Beben; Evans, David M; Hayward, Caroline; Lopez, Mary F; Franke, Lude; Russo, Alessia; Heid, Iris M; Salvi, Erika; Vendantam, Sailaja; Arking, Dan E; Boerwinkle, Eric; Chambers, John C; Fiorito, Giovanni; Grallert, Harald; Guarrera, Simonetta; Homuth, Georg; Huffman, Jennifer E; Porteous, David; Moradpour, Darius; Iranzo, Alex; Hebebrand, Johannes; Kemp, John P; Lammers, Gert J; Aubert, Vincent; Heim, Markus H; Martin, Nicholas G; Montgomery, Grant W; Peraita-Adrados, Rosa; Santamaria, Joan; Negro, Francesco; Schmidt, Carsten O; Scott, Robert A; Spector, Tim D; Strauch, Konstantin; Völzke, Henry; Wareham, Nicholas J; Yuan, Wei; Bell, Jordana T; Chakravarti, Aravinda; Kooner, Jaspal S; Peters, Annette; Matullo, Giuseppe; Wallaschofski, Henri; Whitfield, John B; Paccaud, Fred; Vollenweider, Peter; Bergmann, Sven; Beckmann, Jacques S; Tafti, Mehdi; Hastie, Nicholas D; Cusi, Daniele; Bochud, Murielle; Frayling, Timothy M; Metspalu, Andres; Jarvelin, Marjo-Riitta; Scherag, André; Smith, George Davey; Borecki, Ingrid B; Rousson, Valentin; Hirschhorn, Joel N; Rivolta, Carlo; Loos, Ruth J F; Kutalik, Zoltán
2014-07-01
The phenotypic effect of some single nucleotide polymorphisms (SNPs) depends on their parental origin. We present a novel approach to detect parent-of-origin effects (POEs) in genome-wide genotype data of unrelated individuals. The method exploits increased phenotypic variance in the heterozygous genotype group relative to the homozygous groups. We applied the method to >56,000 unrelated individuals to search for POEs influencing body mass index (BMI). Six lead SNPs were carried forward for replication in five family-based studies (of ∼4,000 trios). Two SNPs replicated: the paternal rs2471083-C allele (located near the imprinted KCNK9 gene) and the paternal rs3091869-T allele (located near the SLC2A10 gene) increased BMI equally (beta = 0.11 (SD), P<0.0027) compared to the respective maternal alleles. Real-time PCR experiments on lymphoblastoid cell lines from the CEPH families showed that expression of both genes was dependent on the parental origin of the SNP alleles (P<0.01). Our scheme opens new opportunities to exploit GWAS data of unrelated individuals to identify POEs and demonstrates that they play an important role in adult obesity. PMID:25078964
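The idea behind the variance test can be illustrated with a short sketch in which a Levene/Brown-Forsythe test stands in for the paper's actual statistic; array layouts and names are illustrative.

```python
import numpy as np
from scipy.stats import levene

def poe_variance_scan(genotypes, phenotype):
    """Sketch of the variance-based parent-of-origin screen.

    genotypes: (n_individuals, n_snps) array coded 0/1/2;
    phenotype: length-n array (e.g. BMI). A POE inflates variance
    among heterozygotes, whose single copy may be maternal or paternal.
    """
    pvals = []
    for g in np.asarray(genotypes).T:
        het = phenotype[g == 1]
        hom = phenotype[(g == 0) | (g == 2)]
        if het.size < 3 or hom.size < 3:
            pvals.append(np.nan)          # not enough data at this SNP
            continue
        pvals.append(levene(het, hom, center='median').pvalue)
    return np.array(pvals)
```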
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large system; however, it is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
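For reference, the quadratic functional being minimized is, in its usual formulation,

\[
\min_{\phi}\ \sum_{i,j} w^{x}_{i,j}\left(\phi_{i+1,j}-\phi_{i,j}-\Delta^{x}_{i,j}\right)^{2}+\sum_{i,j} w^{y}_{i,j}\left(\phi_{i,j+1}-\phi_{i,j}-\Delta^{y}_{i,j}\right)^{2},
\]

where \(\Delta^{x}\), \(\Delta^{y}\) are the wrapped phase differences and \(w^{x}\), \(w^{y}\) the quality weights; its normal equations form the large sparse system discussed above, which the wavelet transform re-expresses across coarse and fine resolution levels with a better convergence condition.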
Zhou, Fei; Zhao, Yajing; Peng, Jiyu; Jiang, Yirong; Li, Maiquan; Jiang, Yuan; Lu, Baiyi
2017-07-01
Osmanthus fragrans flowers are used as folk medicine and as additives for teas, beverages and foods. The metabolites of O. fragrans flowers from different geographical origins differ to some extent. Chromatography and mass spectrometry combined with multivariable analysis methods provide an approach for discriminating the origin of O. fragrans flowers. Our objective was to discriminate Osmanthus fragrans var. thunbergii flowers from different origins using the identified metabolites. GC-MS and UPLC-PDA were conducted to analyse the metabolites in O. fragrans var. thunbergii flowers (150 samples in total). Principal component analysis (PCA), soft independent modelling of class analogy analysis (SIMCA) and random forest (RF) analysis were applied to group the GC-MS and UPLC-PDA data. GC-MS identified 32 compounds common to all samples, while UPLC-PDA/QTOF-MS identified 16 common compounds. PCA of the UPLC-PDA data generated better clustering than PCA of the GC-MS data. Ten metabolites (six from GC-MS and four from UPLC-PDA) were selected by PCA loadings as effective compounds for discrimination. SIMCA and RF analysis were used to build classification models, and the RF model, based on the four effective compounds (a caffeic acid derivative, acteoside, ligustroside and compound 15), yielded better results with a classification rate of 100% in the calibration set and 97.8% in the prediction set. GC-MS and UPLC-PDA combined with multivariable analysis methods can discriminate the origin of Osmanthus fragrans var. thunbergii flowers. Copyright © 2017 John Wiley & Sons, Ltd.
The COSMIC-DANCE project: Unravelling the origin of the mass function
NASA Astrophysics Data System (ADS)
Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Berihuete, A.; Olivares, J.; Moraux, E.; Bouvier, J.; Tamura, M.; Cuillandre, J.-C.; Beletsky, Y.; Wright, N.; Huelamo, N.; Allen, L.; Solano, E.; Brandner, B.
2017-03-01
The COSMIC-DANCE project is an observational program aiming at understanding the origin and evolution of ultracool objects by measuring the mass function and internal dynamics of young nearby associations down to the fragmentation limit. The least massive members of young nearby associations are identified using modern statistical methods in a multi-dimensional space of optical and infrared luminosities, colors, and proper motions. The photometry and astrometry are obtained by combining ground-based and in some cases space-based archival observations with new observations, covering between one and two decades.
Molecular imaging in neuroendocrine tumors: molecular uptake mechanisms and clinical results.
Koopmans, Klaas P; Neels, Oliver N; Kema, Ido P; Elsinga, Philip H; Links, Thera P; de Vries, Elisabeth G E; Jager, Pieter L
2009-09-01
Neuroendocrine tumors can originate almost everywhere in the body and consist of a great variety of subtypes. This paper focuses on molecular imaging methods using nuclear medicine techniques in neuroendocrine tumors, coupling the molecular uptake mechanisms of radiotracers with clinical results. A non-systematic review is presented on receptor-based and metabolic imaging methods. Receptor-based imaging covers the molecular backgrounds of somatostatin, vasoactive intestinal peptide (VIP), bombesin and cholecystokinin (CCK) receptors and their link with nuclear imaging. Imaging methods based on specific metabolic properties include meta-iodobenzylguanidine (MIBG) and pentavalent dimercaptosuccinic acid (DMSA-V) scintigraphy as well as more modern positron emission tomography (PET)-based methods using radio-labeled analogues of amino acids, glucose, dihydroxyphenylalanine (DOPA), dopamine and tryptophan. Diagnostic sensitivities are presented for each imaging method and for each neuroendocrine tumor subtype. Finally, a Forest plot analysis of diagnostic performance is presented for each tumor type in order to provide a comprehensive overview for clinical use.
Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo
2013-09-01
The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction. Therefore, we have devised a new automated C-D analysis method applying a support vector machine (SVM) and tested it for robustness to nonlinear image processing. We acquired CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A diffusion-equation-based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced counterparts. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (plots of the mean smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with those averaged across the six human observers for both the original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with those from the human observers in both cases. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with nonlinear noise reduction as well as for original radiographic images.
Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L; Burgess, Katherine M; Puumala, Susan E; Wilton, Georgiana; Hanson, Jessica D
2015-06-01
The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To establish validity, content experts provided input into the survey measures, and a "think aloud" methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test-retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test-retest revealed that some of the questions performed better in the modified version, whereas others appeared to be more reliable in the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and for indicating specific survey questions that needed to be modified for this population. © The Author(s) 2015.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets, and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlations in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
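A minimal numpy sketch of the threshold-pruning step on a single scale (the use of absolute correlation and all names are illustrative):

```python
import numpy as np

def correlation_network(fluctuations, threshold):
    """Build a price-fluctuation correlation network at one scale.

    fluctuations: 2-D array, rows = time, columns = stocks (each
    column would be one scale of the decomposed series).
    Returns a boolean adjacency matrix after threshold pruning.
    """
    corr = np.corrcoef(fluctuations, rowvar=False)
    adj = np.abs(corr) >= threshold      # delete weak edges
    np.fill_diagonal(adj, False)         # no self-loops
    return adj

# sweeping the threshold turns the fully connected graph into a family
# of sparser networks whose indicators (degree, clustering, ...) can
# then be compared across time scales
```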
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key elements of the IDR algorithms are the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to a fixed subspace. Other independent approaches to analyzing and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that employ various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
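For context, the embedded Sonneveld subspaces mentioned above are generated by the recursion

\[
\mathcal{G}_{0}=\mathcal{K}_{N}(A,r_{0}),\qquad \mathcal{G}_{j+1}=(I-\omega_{j+1}A)\left(\mathcal{G}_{j}\cap\mathcal{S}\right),
\]

where \(\mathcal{S}\) is a fixed shadow subspace and the \(\omega_{j}\) are free scalar parameters; the IDR theorem guarantees that the dimensions of the \(\mathcal{G}_{j}\) strictly decrease, so residuals confined to successive \(\mathcal{G}_{j}\) are eventually forced to zero.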
NASA Astrophysics Data System (ADS)
Liu, J.; Lan, T.; Qin, H.
2017-10-01
Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much smaller than the proportion of correct data for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When machine learning algorithms are used to classify diagnostic data based on a class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balancing effect of the Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of the improved training set structure on the data cleaning performance of the TDGS method is demonstrated with an application example in the EAST POlarimetry-INTerferometry (POINT) system.
Evaluation of radiographic interpretation competence of veterinary students in Finland.
Koskinen, Heli I; Snellman, Marjatta
2009-01-01
In the evaluation of the clinical competence of veterinary students, many different definitions and methods are accepted. Due to the increasing discussion of the quality of outcomes produced by newly graduated veterinarians, methods for the evaluation of clinical competencies should also be evaluated. In this study, this was done by comparing two qualitative evaluation schemes: the well-known structure of observed learning outcome (SOLO) taxonomy and a modification of this taxonomy. A case-based final radiologic examination was selected, and the investigation was performed by classifying students' outcomes. These classes were then set against the original (quantitative) scores, and statistical calculations were performed. Significant correlations between the taxonomies (0.53) and between the modified taxonomy and the original scores (0.66) were found, and some qualitative similarities between the evaluation methods were observed. In addition, some supplements are recommended for the structure of the evaluation schemes, especially for the structure of the modified SOLO taxonomy.
Verification of learner’s differences by team-based learning in biochemistry classes
2017-01-01
Purpose We tested the effect of team-based learning (TBL) on medical education through second-year premedical students' TBL scores in biochemistry classes over 5 years. Methods We analyzed the results based on test scores before and after the students' debate. The students were divided for statistical analysis as follows: group 1 comprised the top-ranked students, group 3 comprised the low-ranked students, and group 2 comprised the medium-ranked students. Group T comprised all 382 students (the total number of students in groups 1, 2, and 3). To calibrate the difficulty of the test, original scores were converted into standardized scores. We determined the differences between the tests using Student t-test, and the relationship between scores before and after the TBL using linear regression tests. Results Although there was a decrease in the lowest score, groups T and 3 showed a significant increase in both original and standardized scores; there was also an increase in the standardized score of group 3. There was a positive correlation between the pre- and post-debate scores in groups T and 2, and the beta values of the pre-debate scores and of the changes between the pre- and post-debate scores were statistically significant in both original and standardized scores. Conclusion TBL is an educational method that helps students improve their grades, particularly those of low-ranked students. PMID:29207457
A Protocol for Epigenetic Imprinting Analysis with RNA-Seq Data.
Zou, Jinfeng; Xiang, Daoquan; Datla, Raju; Wang, Edwin
2018-01-01
Genomic imprinting is an epigenetic regulatory mechanism that operates through expression of certain genes from the maternal or paternal allele in a parent-of-origin-specific manner. Imprinted genes have been identified in diverse biological systems and are implicated in some human diseases and in embryonic and seed developmental programs in plants. The molecular programs and mechanisms underlying imprinting are yet to be explored in depth in plants. Recent advances in RNA-Seq-based methods and technologies offer an opportunity to systematically analyze epigenetic imprinting operating at the whole-genome level in model and crop plants. We are interested in using the Arabidopsis model system to investigate gene expression patterns associated with parent of origin and their implications for imprinting during embryo and seed development. Toward this, we have generated early-embryo-development RNA-Seq-based transcriptome datasets in F1s from a genetic cross between two diverse Arabidopsis thaliana ecotypes, Col-0 and Tsu-1. With these data, we developed a protocol for evaluating the maternal and paternal contributions of genes during the early stages of embryo development after fertilization. This protocol is also designed to consider contamination from other potential seed tissues, sequencing quality, proper processing of sequenced reads and variant calling, and appropriate inference of the parental contributions based on the parent-of-origin-specific single-nucleotide polymorphisms within the expressed genes. The approach, methods and protocol developed in this study can be used for evaluating the effects of epigenetic imprinting in plants.
Bovine origin Staphylococcus aureus: A new zoonotic agent?
Rao, Relangi Tulasi; Jayakumar, Kannan; Kumar, Pavitra
2017-10-01
The study aimed to assess the nature of Staphylococcus aureus strains of animal origin. The study has zoonotic importance and aimed to compare virulence between strains from two different hosts, i.e., of bovine and ovine origin. Conventional polymerase chain reaction-based methods were used for the characterization of S. aureus strains, and a chick embryo model was employed for the assessment of the virulence capacity of the strains. All statistical tests were carried out in the R program, version 3.0.4. After initial screening and molecular characterization, the prevalence of S. aureus was found to be 42.62% in bovine origin samples and 28.35% in ovine origin samples. Meanwhile, the prevalence of methicillin-resistant S. aureus was found to be meager in both hosts: among the samples, only 6.8% of isolates tested positive for methicillin resistance. Biofilm formation was quantified and the variation compared between the hosts; a Welch two-sample t-test was statistically significant (t=2.3179, df=28.103, p=0.02795). The chicken embryo model was found effective for testing the pathogenicity of the strains. The study helped to conclude that healthy bovines can act as S. aureus reservoirs and that bovine origin S. aureus strains are more virulent than ovine origin strains. Bovine origin strains have a high probability of becoming a zoonotic pathogen. Further gene knock-out studies may be conducted to confirm the zoonotic potential of the bovine origin strains.
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
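The rainbow coloring step admits a very small sketch; greedy first-fit coloring, shown here as an illustrative stand-in for the paper's generic method, is enough to convey the idea.

```python
def greedy_coloring(neighbors):
    """Greedy 'rainbow' coloring sketch: adjacent points get different
    colors, so each color group can be swept in parallel without the
    data races of the sequential LU-SGS recurrence.

    neighbors: dict mapping point id -> iterable of adjacent point ids.
    Returns dict point id -> color index.
    """
    colors = {}
    for p in sorted(neighbors):
        used = {colors[q] for q in neighbors[p] if q in colors}
        c = 0
        while c in used:       # first color not used by any neighbor
            c += 1
        colors[p] = c
    return colors

# points of one color have no mutual dependency, so the modified LU-SGS
# sweep processes color groups sequentially but all points within a
# group concurrently (e.g. one GPU kernel launch per color)
```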
Virtual reconstruction of glenoid bone defects using a statistical shape model.
Plessers, Katrien; Vanden Berghe, Peter; Van Dijck, Christophe; Wirix-Speetjens, Roel; Debeer, Philippe; Jonkers, Ilse; Vander Sloten, Jos
2018-01-01
Description of the native shape of a glenoid helps surgeons to preoperatively plan the position of a shoulder implant. A statistical shape model (SSM) can be used to virtually reconstruct a glenoid bone defect and to predict the inclination, version, and center position of the native glenoid. An SSM-based reconstruction method has already been developed for acetabular bone reconstruction. The goal of this study was to evaluate the SSM-based method for the reconstruction of glenoid bone defects and the prediction of native anatomic parameters. First, an SSM was created on the basis of 66 healthy scapulae. Then, artificial bone defects were created in all scapulae and reconstructed using the SSM-based reconstruction method. For each bone defect, the reconstructed surface was compared with the original surface. Furthermore, the inclination, version, and glenoid center point of the reconstructed surface were compared with the original parameters of each scapula. For small glenoid bone defects, the healthy surface of the glenoid was reconstructed with a root mean square error of 1.2 ± 0.4 mm. Inclination, version, and glenoid center point were predicted with an accuracy of 2.4° ± 2.1°, 2.9° ± 2.2°, and 1.8 ± 0.8 mm, respectively. The SSM-based reconstruction method is able to accurately reconstruct the native glenoid surface and to predict the native anatomic parameters. Based on this outcome, statistical shape modeling can be considered a successful technique for use in the preoperative planning of shoulder arthroplasty. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Optical measurements of absorption changes in two-layered diffusive media
NASA Astrophysics Data System (ADS)
Fabbri, Francesco; Sassaroli, Angelo; Henry, Michael E.; Fantini, Sergio
2004-04-01
We have used Monte Carlo simulations for a two-layered diffusive medium to investigate the effect of a superficial layer on the measurement of absorption variations from optical diffuse reflectance data processed by using: (a) a multidistance, frequency-domain method based on diffusion theory for a semi-infinite homogeneous medium; (b) a differential-pathlength-factor method based on a modified Lambert-Beer law for a homogeneous medium and (c) a two-distance, partial-pathlength method based on a modified Lambert-Beer law for a two-layered medium. Methods (a) and (b) lead to a single value for the absorption variation, whereas method (c) yields absorption variations for each layer. In the simulations, the optical coefficients of the medium were representative of those of biological tissue in the near-infrared. The thickness of the first layer was in the range 0.3-1.4 cm, and the source-detector distances were in the range 1-5 cm, which is typical of near-infrared diffuse reflectance measurements in tissue. The simulations have shown that (1) method (a) is mostly sensitive to absorption changes in the underlying layer, provided that the thickness of the superficial layer is ~0.6 cm or less; (2) method (b) is significantly affected by absorption changes in the superficial layer and (3) method (c) yields the absorption changes for both layers with a relatively good accuracy of ~4% for the superficial layer and ~10% for the underlying layer (provided that the absorption changes are less than 20-30% of the baseline value). We have applied all three methods of data analysis to near-infrared data collected on the forehead of a human subject during electroconvulsive therapy. Our results suggest that the multidistance method (a) and the two-distance partial-pathlength method (c) may better decouple the contributions to the optical signals that originate in deeper tissue (brain) from those that originate in more superficial tissue layers.
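For reference, method (c) rests on the modified Lambert-Beer law for a two-layered medium, which in its usual form reads

\[
\Delta OD(r)\ \approx\ \ell_{1}(r)\,\Delta\mu_{a,1}+\ell_{2}(r)\,\Delta\mu_{a,2},
\]

where \(\ell_{i}(r)\) is the partial pathlength in layer \(i\) at source-detector distance \(r\). Measuring at the two distances \(r_{1}\) and \(r_{2}\) yields the 2x2 linear system

\[
\begin{pmatrix}\Delta OD(r_{1})\\ \Delta OD(r_{2})\end{pmatrix}=\begin{pmatrix}\ell_{1}(r_{1}) & \ell_{2}(r_{1})\\ \ell_{1}(r_{2}) & \ell_{2}(r_{2})\end{pmatrix}\begin{pmatrix}\Delta\mu_{a,1}\\ \Delta\mu_{a,2}\end{pmatrix},
\]

which is inverted for the layer-wise absorption changes, with the partial pathlengths supplied here by the Monte Carlo simulations.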
Brain vascular image enhancement based on gradient adjust with split Bregman
NASA Astrophysics Data System (ADS)
Liang, Xiao; Dong, Di; Hui, Hui; Zhang, Liwen; Fang, Mengjie; Tian, Jie
2016-04-01
Light sheet microscopy is a high-resolution fluorescence microscopy technique which enables clear observation of the mouse brain vascular network with immunostaining. However, micro-vessels are stained by few fluorescent antibodies, and their signals are much weaker than those of large vessels, which makes micro-vessels unclear in LSM images. In this work, we developed a vascular image enhancement method to enhance micro-vessel details, which should be useful for vessel statistics analysis. Since the gradient describes the edge information of a vessel, the main idea of our method is to increase the gradient values of the enhanced image to improve micro-vessel contrast. Our method contains two steps: 1) calculate the gradient image of the LSM image, then amplify high gradient values of the original image to enhance the vessel edges and suppress low gradient values to remove noise; we then formulate a new L1-norm regularization optimization problem to find an image with the expected gradient while keeping the main structural information of the original image. 2) The split Bregman iteration method is used to solve the L1-norm regularization problem and generate the final enhanced image. The main advantage of the split Bregman method is that it has both fast convergence and low memory cost. In order to verify the effectiveness of our method, we applied it to a series of mouse brain vascular images acquired from a commercial LSM system in our lab. The experimental results showed that our method could greatly enhance micro-vessel edges that were unclear in the original images.
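One standard way to pose the optimization described above (our reading; the exact functional in the paper may differ in details) is

\[
\hat{u}=\arg\min_{u}\ \left\|\nabla u-g\right\|_{1}+\frac{\mu}{2}\left\|u-u_{0}\right\|_{2}^{2},
\]

where \(u_{0}\) is the original image and \(g\) the adjusted gradient field. Split Bregman handles the L1 term by introducing an auxiliary variable \(d\approx\nabla u-g\) and a Bregman variable \(b\), alternating a cheap quadratic solve for \(u\) with componentwise shrinkage for \(d\); this alternation is what gives the method its fast convergence and low memory cost.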
Harmony search method: theory and applications.
Gao, X Z; Govindasamy, V; Xu, H; Wang, X; Zenger, K
2015-01-01
The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are then briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to a practical wind generator optimal design problem.
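To give a flavour of the algorithm itself, here is a minimal HS sketch for box-constrained minimization; the parameter names (hms, hmcr, par) follow common usage in the HS literature rather than any specific implementation from the review.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000):
    """Minimal Harmony Search sketch for minimizing f over box bounds.

    hms: harmony memory size; hmcr: harmony memory considering rate;
    par: pitch adjusting rate.
    """
    mem = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:              # pick from memory
                x = random.choice(mem)[d]
                if random.random() < par:           # pitch adjustment
                    x += random.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                   # random improvisation
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                       # replace worst harmony
            mem[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return mem[best], scores[best]

# usage: harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 3)
```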
Root Gravitropism: Quantification, Challenges, and Solutions.
Muller, Lukas; Bennett, Malcolm J; French, Andy; Wells, Darren M; Swarup, Ranjan
2018-01-01
Better understanding of root traits such as root angle and root gravitropism will be crucial for the development of crops with improved resource use efficiency. This chapter describes a high-throughput, automated image analysis method to trace Arabidopsis (Arabidopsis thaliana) seedling roots grown on agar plates. The method combines a particle-filtering algorithm with a graph-based method to trace the center line of a root, and it can be adopted for the analysis of several root parameters, such as length, curvature, and stimulus, from the original root traces.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz 02 dataset show that the recognition performance of our method in precision and recall is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interaction necessary to define a multi-block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids; the original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm, with the global domain recursively partitioned into sub-domains. For either method, each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
Tallman, Melissa; Amenta, Nina; Delson, Eric; Frost, Stephen R.; Ghosh, Deboshmita; Klukkert, Zachary S.; Morrow, Andrea; Sawyer, Gary J.
2014-01-01
Diagenetic distortion can be a major obstacle to collecting quantitative shape data on paleontological specimens, especially for three-dimensional geometric morphometric analysis. Here we utilize the recently published algorithmic symmetrization method of fossil reconstruction and compare it to the more traditional reflection & averaging approach. In order to have an objective test of this method, five casts of a female cranium of Papio hamadryas kindae were manually deformed while the plaster hardened. These were subsequently "retrodeformed" using both algorithmic symmetrization and reflection & averaging and then compared to the original, undeformed specimen. We found that in all cases, algorithmic retrodeformation improved the shape of the deformed cranium, and in four out of five cases, the algorithmically symmetrized crania were more similar in shape to the original crania than the reflected & averaged reconstructions. In three out of five cases, the difference between the algorithmically symmetrized crania and the original cranium could be contained within the magnitude of variation among individuals in a single subspecies of Papio. Instances of asymmetric distortion, such as breakage on one side or bending in the axis of symmetry, were well handled, whereas symmetrical distortion remained uncorrected. The technique was further tested on a naturally deformed and fossilized cranium of Paradolichopithecus arvernensis. Results, based on a principal components analysis and Procrustes distances, showed that the algorithmically symmetrized Paradolichopithecus cranium was more similar to other, less-deformed crania from the same species than was the original. These results illustrate the efficacy of retrodeformation by algorithmic symmetrization for the correction of asymmetrical distortion in fossils. Symmetrical distortion remains a problem for all currently developed methods of retrodeformation. PMID:24992483
NASA Astrophysics Data System (ADS)
Xiangfeng, Zhang; Hong, Jiang
2018-03-01
In this paper, the full vector LCD method is proposed to solve the misjudgment problem caused by changes in working conditions. First, the signal from each working condition is decomposed by LCD to obtain Intrinsic Scale Components (ISCs) whose instantaneous frequencies have physical significance. Then, the cross-correlation coefficient between each ISC and the original signal is calculated, and the signal is denoised based on the principle of minimum mutual information. Finally, the sum of the absolute vector mutual information between samples under different working conditions and the denoised ISCs is calculated as the feature for classification with a support vector machine (SVM). A gearbox experiment on a wind turbine vibration platform shows that this method can identify fault characteristics under different working conditions. The advantage of this method is that it reduces dependence on subjective experience and identifies faults directly from the original vibration signal data, giving it high engineering value.
NASA Technical Reports Server (NTRS)
Yun, Hee-Mann (Inventor); DiCarlo, James A. (Inventor)
2014-01-01
Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state, and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes the additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.
Undoing an Epidemiological Paradox: The Tobacco Industry’s Targeting of US Immigrants
Acevedo-Garcia, Dolores; Barbeau, Elizabeth; Bishop, Jennifer Anne; Pan, Jocelyn; Emmons, Karen M.
2004-01-01
Objectives. We sought to ascertain whether the tobacco industry has conceptualized the US immigrant population as a separate market. Methods. We conducted a content analysis of major tobacco industry documents. Results. The tobacco industry has engaged in 3 distinct marketing strategies aimed at US immigrants: geographically based marketing directed toward immigrant communities, segmentation based on immigrants’ assimilation status, and coordinated marketing focusing on US immigrant groups and their countries of origin. Conclusions. Public health researchers should investigate further the tobacco industry’s characterization of the assimilated and non-assimilated immigrant markets, and its specific strategies for targeting these groups, in order to develop informed national and international tobacco control countermarketing strategies designed to protect immigrant populations and their countries of origin. PMID:15569972
Feedback linearization of singularly perturbed systems based on canonical similarity transformations
NASA Astrophysics Data System (ADS)
Kabanov, A. A.
2018-05-01
This paper discusses the problem of feedback linearization of a singularly perturbed system in state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation accomplishes linearization of the system without requiring a virtual output (as the normal form method does), and the transition from the phase coordinates of the transformed system to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic systems (OES) involves the selection of technical solutions that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose, determining its most important capability, is its detection threshold; the required functional quality of the device or system is achieved on the basis of this property. Therefore, the design criteria and optimization methods must be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to detectability is the choice of circuits and components that provide the most effective selection of a target.
Speech transformations based on a sinusoidal representation
NASA Astrophysics Data System (ADS)
Quatieri, T. E.; McAulay, R. J.
1986-05-01
A new speech analysis/synthesis technique is presented which provides the basis for a general class of speech transformations including time-scale modification, frequency scaling, and pitch modification. These modifications can be performed with a time-varying change, permitting continuous adjustment of a speaker's fundamental frequency and rate of articulation. The method is based on a sinusoidal representation of the speech production mechanism that has been shown to produce synthetic speech that preserves the waveform shape and is essentially perceptually indistinguishable from the original. Although the analysis/synthesis system was originally designed for single-speaker signals, it is equally capable of recovering and modifying nonspeech signals such as music, multiple speakers, marine biologic sounds, and speakers in the presence of interference such as noise and musical backgrounds.
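In outline, the underlying model represents the waveform as

\[
s(t)\ \approx\ \sum_{k}A_{k}(t)\cos\theta_{k}(t),
\]

with slowly varying amplitude tracks \(A_{k}(t)\) and phase tracks \(\theta_{k}(t)\) obtained by frame-to-frame matching of spectral peaks. Time-scale modification replays the amplitude and frequency tracks on a stretched or compressed time axis while re-integrating the phases, preserving pitch; frequency and pitch scaling instead rescale the component frequencies before the phases are rebuilt.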
About the mechanism of ERP-system pilot test
NASA Astrophysics Data System (ADS)
Mitkov, V. V.; Zimin, V. V.
2018-05-01
In this paper the mathematical problem of defining the scope of a pilot test is stated, which takes the form of a quadratic programming task. The solution procedure uses the method of network programming, based on a structurally similar network representation of the criterion and constraints, which reduces the original problem to a sequence of simpler evaluation tasks. The evaluation tasks are solved by the method of dichotomous programming.
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
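A minimal sketch of the projected Landweber core (the guided-filter denoising stage of the paper is omitted, nonnegativity stands in for the projection, and all names are illustrative):

```python
import numpy as np

def projected_landweber(A, y, n_iter=200, tau=None):
    """Projected Landweber iteration for compressive recovery.

    A: measurement matrix (speckle patterns flattened to rows);
    y: bucket-detector measurements.
    """
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)         # Landweber gradient step
        x = np.maximum(x, 0.0)                  # project onto constraint set
    return x
```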
Review of analytical methods for the quantification of iodine in complex matrices.
Shelor, C Phillip; Dasgupta, Purnendu K
2011-09-19
Iodine is an essential element of human nutrition. Nearly a third of the global population has insufficient iodine intake and is at risk of developing Iodine Deficiency Disorders (IDD). Most countries have iodine supplementation and monitoring programs. Urinary iodide (UI) is the biomarker used for epidemiological studies; only a few methods are currently used routinely for analysis. These methods either require expensive instrumentation with qualified personnel (inductively coupled plasma-mass spectrometry, instrumental neutron activation analysis) or oxidative sample digestion to remove potential interferences prior to analysis by a kinetic colorimetric method originally introduced by Sandell and Kolthoff ~75 years ago. The Sandell-Kolthoff (S-K) method is based on the catalytic effect of iodide on the reaction between Ce(4+) and As(3+). No available technique fully fits the needs of developing countries; research into inexpensive, reliable methods and instrumentation is needed. There have been multiple reviews of methods used for epidemiological studies and of specific techniques. However, a general review of iodine determination in a wide-ranging set of complex matrices has not been available. While this review is not comprehensive, we cover the principal developments since the original development of the S-K method. Copyright © 2011 Elsevier B.V. All rights reserved.
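The underlying redox chemistry is the iodide-catalyzed reduction of cerium(IV) by arsenic(III),

\[
2\,\mathrm{Ce}^{4+}+\mathrm{As}^{3+}\ \xrightarrow{\ \mathrm{I}^{-}\ }\ 2\,\mathrm{Ce}^{3+}+\mathrm{As}^{5+},
\]

so the iodide concentration in a digested sample is read from the rate at which the yellow Ce(IV) color fades, typically as the absorbance remaining after a fixed reaction time against a calibration curve.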
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Kiremidjian, Anne S.
2011-04-01
This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements at multiple locations, it is necessary to transmit long streams of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption; reducing the amount of data to transmit can therefore significantly lengthen battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. It creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD), and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse coefficient vectors need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment-resisting frame. The results show that the method can compress the data efficiently and restore them with very little error.
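A compact sketch of the transmit/restore pipeline; scikit-learn's DictionaryLearning, which alternates OMP sparse coding with a dictionary update, is used as a stand-in for the exact K-SVD update, and the segment and dictionary sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def compress(record, seg_len=64, n_atoms=96, n_nonzero=5):
    """Sensor side: learn an overcomplete dictionary and keep only
    the sparse codes of the stationary segments."""
    n = len(record) // seg_len * seg_len
    segs = np.reshape(record[:n], (-1, seg_len))    # fixed-length segments
    dl = DictionaryLearning(n_components=n_atoms,
                            transform_algorithm='omp',
                            transform_n_nonzero_coefs=n_nonzero)
    codes = dl.fit_transform(segs)    # sparse coefficients per segment
    return dl.components_, codes      # transmit dictionary + sparse codes

def restore(dictionary, codes):
    """Receiver side: rebuild the record from the transmitted pieces."""
    return (codes @ dictionary).ravel()
```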
Archer, Charles J.; Blocksome, Michael A.
2012-12-11
Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
Flexible-Wing-Based Micro Air Vehicles
NASA Technical Reports Server (NTRS)
Ifju, Peter G.; Jenkins, David A.; Ettinger, Scott; Lian, Yong-Sheng; Shyy, Wei; Waszak, Martin R.
2002-01-01
This paper documents the development and evaluation of an original flexible-wing-based Micro Air Vehicle (MAV) technology that reduces adverse effects of gusty wind conditions and unsteady aerodynamics, exhibits desirable flight stability, and enhances structural durability. The flexible wing concept has been demonstrated on aircraft with wingspans ranging from 18 inches down to 5 inches. Salient features of the flexible-wing-based MAV, including the vehicle concept, flexible wing design, novel fabrication methods, aerodynamic assessment, and flight data analysis, are presented.
Simplex volume analysis for finding endmembers in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.
2015-05-01
Using maximal simplex volume as an optimal criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is calculated. It turns out that the issue of calculating simplex volume is much more complicated and involved than we may think. This paper investigates the issue from two different aspects, geometric structure and eigen-analysis. The geometric structure approach derives the volume from the simplex itself, multiplying its base by its height. The eigen-analysis approach, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank-deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank; the drawback is that the original volume has been shrunk, and the volume found for a dimensionality-reduced simplex is not the true original simplex volume. The other is to use singular value decomposition (SVD) to find singular values for calculating the simplex volume; the dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods of simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
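Both volume computations are short enough to state concretely. The sketch below is a minimal numpy illustration (function names are ours, not the paper's code): it computes the volume of an n-simplex embedded in a higher-dimensional space from the singular values of the edge-vector matrix, and again from the Cayley-Menger determinant, which uses pairwise distances only.

```python
import numpy as np
from math import factorial

def volume_svd(P):
    """Volume of the simplex with vertex rows P ((n+1) x d, d >= n),
    from the product of singular values of the edge-vector matrix."""
    E = P[1:] - P[0]                       # n edge vectors from vertex 0
    s = np.linalg.svd(E, compute_uv=False)
    return np.prod(s) / factorial(len(E))

def volume_cayley_menger(P):
    """Same volume from squared pairwise distances alone."""
    n = len(P) - 1
    D2 = np.square(np.linalg.norm(P[:, None] - P[None, :], axis=-1))
    B = np.ones((n + 2, n + 2))            # bordered Cayley-Menger matrix
    B[0, 0] = 0.0
    B[1:, 1:] = D2
    v2 = (-1) ** (n + 1) / (2 ** n * factorial(n) ** 2) * np.linalg.det(B)
    return np.sqrt(max(v2, 0.0))

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(volume_svd(tri), volume_cayley_menger(tri))   # both print 0.5
```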
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To solve the problem on inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex set (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpolation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image to the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method. PMID:28208837
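The projection step is the heart of POCS SR. The following is a minimal back-projection-style sketch under simplifying assumptions the paper does not make (a single known Gaussian PSF, no inter-frame motion, bilinear resampling); it is meant only to show how residuals between simulated and observed LR frames correct the HR estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pocs_sr(lr_frames, psf_sigma, scale=2, n_iter=30, delta=1e-3):
    """POCS-style SR toy: project the HR estimate toward the constraint
    set defined by each LR observation (illustrative, not the paper's code)."""
    hr = zoom(lr_frames[0], scale, order=1)                  # initial guess
    for _ in range(n_iter):
        for lr in lr_frames:
            # Simulate LR acquisition: blur with the PSF, then downsample.
            sim = zoom(gaussian_filter(hr, psf_sigma), 1 / scale, order=1)
            resid = lr - sim
            # Only residuals outside the tolerance band trigger a correction.
            resid = np.where(np.abs(resid) > delta, resid, 0.0)
            hr += zoom(resid, scale, order=1) / len(lr_frames)
    return hr
```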
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) source images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
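A minimal version of the region-wise selection step can be sketched as follows. This is not the authors' pipeline: mean-shift segmentation and their focus metric are replaced by an arbitrary label map and the common variance-of-Laplacian sharpness proxy, but it shows the idea of merging, per region, the best-focused frame while leaving original pixels untouched elsewhere.

```python
import cv2
import numpy as np

def fuse_stack(stack, labels):
    """stack: list of BGR frames taken at different lens positions;
    labels: integer region map from any segmentation of the sharpest frame."""
    lap = [cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F)
           for f in stack]
    # Start from the globally sharpest frame so untouched pixels stay original.
    base = max(range(len(stack)), key=lambda i: lap[i].var())
    fused = stack[base].copy()
    for r in np.unique(labels):
        mask = labels == r
        # Pick, per region, the frame in which that region is sharpest.
        best = max(range(len(stack)), key=lambda i: lap[i][mask].var())
        fused[mask] = stack[best][mask]
    return fused
```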
A method of power analysis based on piecewise discrete Fourier transform
NASA Astrophysics Data System (ADS)
Xin, Miaomiao; Zhang, Yanchi; Xie, Da
2018-04-01
This paper analyzes existing feature extraction methods and the characteristics of the discrete Fourier transform (DFT) and piecewise aggregation approximation (PAA). Combining the advantages of the two methods, a new piecewise discrete Fourier transform is proposed and used to analyze the lighting power of a large customer. Time-series feature maps for four different cases are compared across the original data, the DFT, PAA and the piecewise DFT. The new method reflects both the overall trend of electricity consumption and its internal variations in power analysis.
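The combination is straightforward to implement: segment the series as PAA would, then keep a few low-frequency DFT coefficients per segment. A minimal sketch (segment count, coefficient count, and the synthetic load record are arbitrary choices, not the paper's data):

```python
import numpy as np

def piecewise_dft(series, n_segments, k):
    """Split the series into equal segments (as in PAA) and keep the k
    lowest-frequency DFT coefficients of each segment as features."""
    segs = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.concatenate([np.fft.rfft(s)[:k] for s in segs])

# A 30-day lighting-load record sampled hourly (synthetic stand-in):
t = np.arange(24 * 30)
load = 50 + 20 * np.sin(2 * np.pi * t / 24) \
       + np.random.default_rng(1).normal(0, 2, t.size)
print(piecewise_dft(load, n_segments=30, k=3).shape)  # (90,) complex features
```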
Prediction of Elderly Anthropometric Dimension Based On Age, Gender, Origin, and Body Mass Index
NASA Astrophysics Data System (ADS)
Indah, P.; Sari, A. D.; Suryoputro, M. R.; Purnomo, H.
2016-01-01
Introduction: Studies have indicated that elderly anthropometric dimensions differ from person to person. To determine whether there are differences in the anthropometric data of the Javanese elderly, this study analyzes whether age, gender, origin, and body mass index (BMI) are associated with elderly anthropometric dimensions. Age is divided into elderly and old categories, gender into male and female, origin into Yogyakarta and Central Java, and only the normal BMI category is used. Method: Anthropometric measurements were carried out on 45 elderly subjects in Sleman, Yogyakarta. Results and Discussion: The results showed that some elderly anthropometric dimensions were influenced by age, origin, and body mass index, whereas gender did not significantly affect the elderly anthropometric dimensions in the Sleman area. The analysis provides important guidance for designing products intended for the Javanese elderly population.
Analyzing the dynamics of DNA replication in Mammalian cells using DNA combing.
Bialic, Marta; Coulon, Vincent; Drac, Marjorie; Gostan, Thierry; Schwob, Etienne
2015-01-01
How cells duplicate their chromosomes is a key determinant of cell identity and genome stability. DNA replication can initiate from more than 100,000 sites distributed along mammalian chromosomes, yet a given cell uses only a subset of these origins due to inefficient origin activation and regulation by developmental or environmental cues. An important consequence of cell-to-cell variations in origin firing is that population-based techniques do not accurately describe how chromosomes are replicated in single cells. DNA combing is a biophysical DNA fiber stretching method that permits visualization of ongoing DNA synthesis along Mb-sized single-DNA molecules purified from cells that were previously pulse-labeled with thymidine analogues. This allows quantitative measurements of several salient features of chromosome replication dynamics, such as fork velocity, fork asymmetry, inter-origin distances, and global instant fork density. In this chapter we describe how to obtain this information from asynchronous cultures of mammalian cells.
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional-order PID (PIλDμ) controllers and to the Genetic Algorithm (GA). The second part first studies how the performance of an integer-order PID controller deteriorates when implemented with lossy capacitors in its analog realization, and then shows that the lossy capacitors can be effectively modeled by fractional-order terms. A novel GA-based method is then proposed to tune the controller parameters such that the original performance is retained even when realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols-type tuning rules for the design of fractional-order PID controllers have been proposed in the literature [11]. In the third part, a novel GA-based method is proposed that shows how equivalent integer-order PID controllers can be obtained that give performance similar to that of fractional-order PID controllers, thereby removing the complexity involved in implementing the latter. Extensive simulation results show that the equivalent integer-order PID controllers more or less retain the robustness and iso-damping properties of the original fractional-order PID controllers. Simulation results also show that the equivalent integer-order PID controllers are more robust than conventionally Ziegler-Nichols-tuned PID controllers.
Li, Jing; Hong, Wenxue
2014-12-01
Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.
Enhanced simulation software for rocket turbopump, turbulent, annular liquid seals
NASA Technical Reports Server (NTRS)
Padavala, Satya; Palazzolo, Alan
1994-01-01
One of the main objectives of this work is to develop a new dynamic analysis for liquid annular seals with arbitrary profiles and to analyze a general distorted interstage seal of the space shuttle main engine high pressure oxygen turbopump (SSME-ATD-HPOTP). The dynamic analysis developed is based on a method originally proposed by Nelson and Nguyen. A simpler scheme based on cubic splines is found to be computationally more efficient and has better convergence properties at higher eccentricities. The first order solution of the original analysis is modified by including a more exact solution that takes into account the variation of perturbed variables along the circumference. A new set of equations for dynamic analysis is derived based on this more general model. A unified solution procedure that is valid for both Moody's and Hirs' friction models is presented. Dynamic analysis is developed for three different models: constant properties, variable properties, and thermal effects with variable properties. Arbitrarily varying seal profiles in both axial and circumferential directions are considered. An example case of an elliptical seal with varying degrees of axial curvature is analyzed in detail. A case study based on predicted clearances of an interstage seal of the SSME-ATD-HPOTP is presented. Dynamic coefficients based on an externally specified load are introduced to analyze seals that support a preload. The other objective of this work is to study the effect of large rotor displacements of the SSME-ATD-HPOTP on the dynamics of the annular seal and the resulting transient motion. One task is to identify the magnitude of motion of the rotor about the centered position and establish limits of effectiveness of current linear models. This task is accomplished by solving the bulk flow model seal governing equations directly for transient seal forces for any given type of motion, including motion with large eccentricities. Based on the above study, an equivalence is established between transient motion based on linearized coefficients and the same motion as predicted by the original governing equations. An innovative method is developed to model nonlinearities in an annular seal based on dynamic coefficients computed at various static eccentricities. This method is thoroughly tested for various types of transient motion using bulk flow model results as a benchmark.
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during the atmospheric entry of a planet or a moon as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect-gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on cache-coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented in both perfect- and real-gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way to parallelize is to schedule processors like a pipeline in software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
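The hyperplane ordering is easiest to see on a model problem. The numpy sketch below is an independent toy, not the RGAS or INS3D-LU implementation: it sweeps a 2D Laplace solve along anti-diagonals i+j = const. Every point on one hyperplane depends only on the two neighbouring hyperplanes, so a whole diagonal can be updated at once, which is exactly what makes partitioning it across processors legal.

```python
import numpy as np

def gauss_seidel_hyperplane(u, n_sweeps=200):
    """Gauss-Seidel for Laplace's equation, ordered along hyperplanes."""
    n = u.shape[0]
    for _ in range(n_sweeps):
        for d in range(2, 2 * n - 3):               # interior anti-diagonals
            i = np.arange(max(1, d - n + 2), min(n - 2, d - 1) + 1)
            j = d - i
            # Neighbours lie on hyperplanes d-1 (already updated this sweep)
            # and d+1 (previous sweep), so the diagonal update is independent.
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                              u[i, j - 1] + u[i, j + 1])
    return u

u = np.zeros((64, 64))
u[0, :] = 1.0                                       # hot top boundary
gauss_seidel_hyperplane(u)
```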
García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J
2010-01-01
Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.
Functional Coatings or Films for Hard-Tissue Applications
Wang, Guocheng; Zreiqat, Hala
2010-01-01
Metallic biomaterials such as stainless steel, Co-based alloys, and Ti and its alloys are widely used as artificial hip joints, bone plates and dental implants due to their excellent mechanical properties and endurance. However, there are some surface-originated problems associated with metallic implants: corrosion and wear in biological environments, resulting in ion release and formation of wear debris; poor implant fixation resulting from lack of osteoconductivity and osteoinductivity; and implant-associated infections due to bacterial adhesion and colonization at the implantation site. To overcome these surface-originated problems, a variety of surface modification techniques have been applied to metallic implants, including chemical treatments, physical methods and biological methods. This review surveys coatings that provide anti-corrosion and anti-wear properties, biocompatibility and bioactivity, and antibacterial activity. PMID:28883319
Goto, Masami; Kunimatsu, Akira; Shojima, Masaaki; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Mori, Harushi; Ino, Kenji; Yano, Keiichi; Saito, Nobuhito; Ohtomo, Kuni
2013-03-25
We present a case in which the origin of a branching vessel at the aneurysm neck was observed in the wrong place on volume rendering (VR) images from 3D time-of-flight MRA (3D-TOF-MRA) on a 3-Tesla MR system. In 3D-TOF-MRA, it is often difficult to observe the origin of a branching vessel, but it is unusual for it to be observed in the wrong place. In the planning of interventional treatment and surgical procedures, false recognition, as in the unique case in the present report, is a serious problem. Decisions based only on VR with 3D-TOF-MRA can lead to suboptimal selection of clinical treatment.
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral imaging based light field display method using a holographic diffuser is proposed, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved, independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method substantially improves the quality of the reconstructed RGB images, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, when compared with existing chroma subsampling schemes.
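The subsampling and the quality metric at the core of the paper are both simple to write down. A minimal sketch follows, assuming averaging-based 4:2:0 (one common variant); the paper's optimal luma-modification theorem itself is not reproduced here.

```python
import numpy as np

def subsample_420(c):
    """Average each 2x2 chroma block (an averaging-based 4:2:0 variant)."""
    h, w = c.shape
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cpsnr(rgb_ref, rgb_rec):
    """Color PSNR computed jointly over all three channels of 8-bit images."""
    mse = np.mean((rgb_ref.astype(float) - rgb_rec.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```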
Chaillet, Nils; Nyström, Marjatta; Demirjian, Arto
2005-09-01
Dental maturity was studied with 9577 dental panoramic tomograms of healthy subjects from 8 countries, aged between 2 and 25 years. Demirjian's method based on 7 teeth was used to determine dental maturity scores and to establish gender-specific tables of maturity scores and development graphs. The aim of this study was to provide dental maturity standards for when the ethnic origin is unknown and to compare the efficiency and applicability of this method for forensic scientists and dental clinicians. The second aim was to compare the dental maturity of these different populations. We noted a high efficiency for the international Demirjian method at the 99% CI (0.85% misclassified and a mean accuracy of +/- 2.15 years between 2 and 18 years), which makes it useful for forensic purposes. Nevertheless, this international method is less accurate than a Demirjian method developed for a specific country, because of the inter-ethnic variability introduced by combining 8 countries in the dental database. Inter-ethnic differences fall into three major groups: Australians have the fastest dental maturation and Koreans the slowest.
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this paradigm's principle. In feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy
Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.
2015-01-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937
Tchebichef moment based restoration of Gaussian blurred images.
Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C
2016-11-10
With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as a feature vector to train an extreme learning machine for estimating the blur parameters (σ, w). The effectiveness of the proposed method in estimating the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split-Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most cases performs better than the three existing methods in terms of visual quality evaluated using the structural similarity index.
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
Fuzzy connectedness (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, which can greatly enhance the calculation accuracy. The improved method proceeds iteratively. In the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory use. In the second iteration, voxels mis-labeled because of asynchronous updates are recomputed. Three CT sequences of hepatic vasculature of different sizes were used in the experiments, with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method over these three datasets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the correction step fixes the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors than the original CUDA-kFOE, as demonstrated in the experimental results. In future work, we will focus on automatic acquisition methods and automatic processing.
Lin, Hai-jun; Zhang, Hui-fang; Gao, Ya-qi; Li, Xia; Yang, Fan; Zhou, Yan-fei
2014-12-01
The hyperspectral reflectance of Populus euphratica, Tamarix hispida, Haloxylon ammodendron and Calligonum mongolicum in the lower reaches of the Tarim River and the Turpan Desert Botanical Garden was measured using an HR-768 field-portable spectroradiometer. Continuum removal, first-derivative reflectance and second-derivative reflectance were used to process the original spectral data of the four tree species. The Mahalanobis distance method was used to select the bands with significant differences in the original and transformed spectral data for identifying the different tree species, and stepwise discriminant analyses were used to test the selected bands. The results showed that the Mahalanobis distance method is effective for feature-band extraction, and that the bands useful for identifying the tree species were mostly near-infrared bands. The recognition accuracy of the four methods was 85%, 93.8%, 92.4% and 95.5%, respectively; spectral transforms thus improved recognition accuracy, and the accuracy differed across research objects and transform methods. The research provides a basis for desert tree species classification, biodiversity monitoring and area estimation in deserts using large-scale remote sensing.
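For two species, a per-band separability score in the spirit of the paper's Mahalanobis-distance selection can be sketched as follows. Variable names such as populus and tamarix are placeholders for measured reflectance matrices; in a single band the Mahalanobis distance reduces to the mean difference scaled by the pooled standard deviation.

```python
import numpy as np

def band_separability(X1, X2):
    """Per-band separability between two species' spectra.
    X1, X2: (n_samples, n_bands) reflectance matrices."""
    m1, m2 = X1.mean(0), X2.mean(0)
    # Pooled per-band variance of the two groups.
    s2 = (X1.var(0, ddof=1) * (len(X1) - 1) +
          X2.var(0, ddof=1) * (len(X2) - 1)) / (len(X1) + len(X2) - 2)
    return np.abs(m1 - m2) / np.sqrt(s2)

# Rank bands and keep the most discriminative ones (mostly NIR in the paper):
# top_bands = np.argsort(band_separability(populus, tamarix))[::-1][:20]
```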
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods and the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by using only the reflectance at a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to hemoglobin degradation and scales only with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level, as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, and storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor for DBS-based results to alleviate the Hct bias.
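The single-wavelength model amounts to a one-variable calibration followed by Bland-Altman agreement analysis. A sketch with invented numbers (the real calibration curve and the 233-sample evaluation are in the paper):

```python
import numpy as np

# Hypothetical calibration pairs: reflectance at the quasi-isosbestic
# wavelength vs. reference Hct measured on whole blood.
refl = np.array([0.42, 0.38, 0.33, 0.29, 0.25, 0.21])
hct = np.array([0.20, 0.28, 0.36, 0.44, 0.52, 0.60])

slope, intercept = np.polyfit(refl, hct, 1)        # single-wavelength model
predict = lambda r: slope * r + intercept

# Bland-Altman agreement on new (also hypothetical) samples:
hct_ref = np.array([0.35, 0.41, 0.47])
hct_dbs = predict(np.array([0.315, 0.285, 0.253]))
diff = hct_dbs - hct_ref
lo = diff.mean() - 1.96 * diff.std(ddof=1)
hi = diff.mean() + 1.96 * diff.std(ddof=1)
print(f"bias {diff.mean():+.3f}, limits of agreement {lo:+.3f} to {hi:+.3f}")
```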
Learning outside the Primary Classroom
ERIC Educational Resources Information Center
Sedgwick, Fred
2012-01-01
In "Learning Outside the Primary Classroom," the educationalist and writer Fred Sedgwick explores in a practical way the many opportunities for intense learning that children and teachers can find outside the confines of the usual learning environment, the classroom. This original work is based on tried and tested methods from UK primary…
Determining the relative susceptibility of four prion protein genotypes to atypical scrapie
USDA-ARS?s Scientific Manuscript database
Atypical scrapie is a sheep prion (PrPSc) disease whose epidemiology is consistent with a sporadic origin and is associated with specific polymorphisms of the normal cellular prion protein (PrPC). We describe a mass spectrometry-based method of detecting and quantifying the polymorphisms of sheep P...
Scientific and Humanistic Evaluations of Follow Through.
ERIC Educational Resources Information Center
House, Ernest R.
The thesis of this paper is that the humanistic mode of inquiry is underemployed in evaluation studies and the future evaluation of Follow Through could profitably use humanistic approaches. The original Follow Through evaluation was based on the assumption that the world consists of a single system explainable by appropriate methods; the…
A Writing-Intensive, Methods-Based Laboratory Course for Undergraduates
ERIC Educational Resources Information Center
Colabroy, Keri L.
2011-01-01
Engaging undergraduate students in designing and executing original research should not only be accompanied by technique training but also intentional instruction in the critical analysis and writing of scientific literature. The course described here takes a rigorous approach to scientific reading and writing using primary literature as the model…
Using component technology to facilitate external software reuse in ground-based planning systems
NASA Technical Reports Server (NTRS)
Chase, A.
2003-01-01
APGEN (Activity Plan GENerator - 314), a multi-mission planning tool, must interface with external software to best serve its users. APGEN's original method for incorporating external software, the User-Defined Library mechanism, has been very successful in allowing APGEN users access to external software functionality.
Fetal Origins of Child Non-Right-Handedness and Mental Health
ERIC Educational Resources Information Center
Rodriguez, Alina; Waldenstrom, Ulla
2008-01-01
Background: Environmental risk during fetal development for non-right-handedness, an index of brain asymmetry, and its relevance for child mental health is not fully understood. Methods: A Swedish population-based prospective pregnancy-offspring cohort was followed-up when children were five years old (N = 1714). Prenatal environmental risk…
Grassroots Journalism in the City: Cleveland's Neighborhood Newspapers. Monograph No. 6.
ERIC Educational Resources Information Center
Jeffres, Leo W.; And Others
The first section of this monograph on community newspapers describes the patterns and trends of "grassroots journalism" in the Cleveland, Ohio, area. Based on interviews with 37 newspaper editors, the following topics are covered: origins and history, goals, organization and structure, method of production, advertising, content,…
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
[Discussion on the botanical origin of Isatidis radix and Isatidis folium based on DNA barcoding].
Sun, Zhi-Ying; Pang, Xiao-Hui
2013-12-01
This paper aimed to investigate the botanical origins of Isatidis Radix and Isatidis Folium and to clarify the confusion in their classification. The second internal transcribed spacer (ITS2) of ribosomal DNA and the chloroplast matK gene of 22 samples from major production areas were amplified and sequenced. Sequence assembly and consensus sequence generation were performed using CodonCode Aligner. Phylogenetic analysis was performed using MEGA 4.0 software in accordance with the Kimura 2-parameter (K2P) model, and the phylogenetic tree was constructed using the neighbor-joining method. The results showed that the length of the ITS2 sequence of the botanical origins of Isatidis Radix and Isatidis Folium was 191 bp. Some samples had several SNP sites, and some had heterozygous sites. In the NJ tree based on the ITS2 sequence, the studied samples separated into two groups, one of which clustered with Isatis tinctoria L. The studied samples were also clearly divided into two groups based on the chloroplast matK gene. In conclusion, our results support that the botanical origin of Isatidis Radix and Isatidis Folium is Isatis indigotica Fortune, and that Isatis indigotica and Isatis tinctoria are two distinct species. This study does not support the merging of these two species in the Flora of China.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, reversible watermarking (also called lossless, invertible, or erasable watermarking) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
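The bit-expansion idea can be shown on a single integer pair, the building block of an integer Haar transform. This toy, which omits the overflow checks, the JBIG2-coded location map, and the hash described in the abstract, demonstrates the exact reversibility that defines the method.

```python
def embed_pair(a, b, bit):
    """Difference expansion on one integer pixel pair: keep the average,
    expand the difference by one bit (overflow handling omitted)."""
    l, h = (a + b) // 2, a - b
    h2 = 2 * h + bit                     # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_pair(a2, b2):
    """Recover the payload bit and the original pair exactly."""
    l, h2 = (a2 + b2) // 2, a2 - b2
    bit, h = h2 & 1, h2 >> 1
    return l + (h + 1) // 2, l - h // 2, bit

a2, b2 = embed_pair(120, 117, 1)
print(extract_pair(a2, b2))              # (120, 117, 1) -- fully reversible
```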
EFFECTIVE REMOVAL METHOD OF ILLEGAL PARKING BICYCLES BASED ON THE QUANTITATIVE CHANGE AFTER REMOVAL
NASA Astrophysics Data System (ADS)
Toi, Satoshi; Kajita, Yoshitaka; Nishikawa, Shuichirou
This study aims to find an effective method for removing illegally parked bicycles, based on an analysis of how their number changes after removal. We built a time-and-space quantitative distribution model of illegally parked bicycles after removal, considering the logistic increase of illegally parked bicycles and several behaviors, such as direct or indirect return to the original parking place and avoidance of the original parking place, based on a survey of actual illegal bicycle parking in the TENJIN area of FUKUOKA city. Moreover, we built a simulation model incorporating the above model and calculated the number of illegally parked bicycles while varying the removal frequency and the number of bicycles removed at one time. Four interesting results were obtained. (1) The speed of recovery after removal differs by zone. (2) Thorough removal is effective in keeping the number of illegally parked bicycles at a lower level. (3) Removal in one zone increases the number of bicycles in other zones where the level of illegal parking is lower. (4) The relationship between the effects and costs of removing illegally parked bicycles was clarified.
Duecker, Daniel-André; Geist, A. René; Hengeler, Michael; Kreuzer, Edwin; Pick, Marc-André; Rausch, Viktor; Solowjow, Eugen
2017-01-01
Self-localization is one of the most challenging problems for deploying micro autonomous underwater vehicles (μAUV) in confined underwater environments. This paper extends a recently-developed self-localization method that is based on the attenuation of electro-magnetic waves, to the μAUV domain. We demonstrate a compact, low-cost architecture that is able to perform all signal processing steps present in the original method. The system is passive with one-way signal transmission and scales to possibly large μAUV fleets. It is based on the spherical localization concept. We present results from static and dynamic position estimation experiments and discuss the tradeoffs of the system. PMID:28445419
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into blocks of the original size using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
Novel citation-based search method for scientific literature: application to meta-analyses.
Janssens, A Cecile J W; Gwinn, M
2015-10-13
Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
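The ranking step is a simple counting exercise. A sketch follows, assuming reference lists have already been retrieved for the citing corpus (the paper pulls these from Web of Science); an article is co-cited with a seed each time both appear in the same reference list.

```python
from collections import Counter

def rank_by_cocitation(known, citing_corpus):
    """known: set of seed article IDs; citing_corpus: iterable of
    reference lists, one list per citing article."""
    score = Counter()
    for refs in citing_corpus:
        refs = set(refs)
        hits = len(refs & known)          # how many seeds this list cites
        if hits:
            for r in refs - known:
                score[r] += hits          # credit every co-cited article
    return score.most_common()            # screen those above a threshold

corpus = [["A", "B", "X"], ["A", "X", "Y"], ["B", "Z"]]
print(rank_by_cocitation({"A", "B"}, corpus))   # X outranks Y and Z
```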
Singularity computations. [finite element methods for elastoplastic flow]
NASA Technical Reports Server (NTRS)
Swedlow, J. L.
1978-01-01
Direct descriptions of the structure of a singularity would describe the radial and angular distributions of the field quantities as explicitly as practicable along with some measure of the intensity of the singularity. This paper discusses such an approach based on recent development of numerical methods for elastoplastic flow. Attention is restricted to problems where one variable or set of variables is finite at the origin of the singularity but a second set is not.
Kim, Junho; Maeng, Ju Heon; Lim, Jae Seok; Son, Hyeonju; Lee, Junehawk; Lee, Jeong Ho; Kim, Sangwoo
2016-10-15
Advances in sequencing technologies have remarkably lowered the detection limit of somatic variants to low frequencies. However, calling mutations in this range is still confounded by many factors, including environmental contamination. Vector contamination is a continuously occurring issue and is especially problematic since vector inserts are hardly distinguishable from the sample sequences. Such inserts, which may harbor polymorphisms and engineered functional mutations, can result in false variant calls at the corresponding sites. Numerous vector-screening methods have been developed, but none can handle contamination from inserts because they focus on vector backbone sequences alone. We developed a novel method, Vecuum, that identifies vector-originated reads and the resultant false variants. Since vector inserts are generally constructed from intron-less cDNAs, Vecuum identifies vector-originated reads by inspecting the clipping patterns at exon junctions. False variant calls are further detected based on the biased distribution of mutant alleles among vector-originated reads. Tests on simulated and spike-in experimental data validated that Vecuum can detect 93% of vector contaminants and remove up to 87% of variant-like false calls with 100% precision. Application to public sequence datasets demonstrated the utility of Vecuum in detecting false variants resulting from various types of external contamination. A Java-based implementation of the method is available at http://vecuum.sourceforge.net/. Contact: swkim@yuhs.ac. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Lichtenhan, J T; Hartsock, J; Dornhoffer, J R; Donovan, K M; Salt, A N
2016-11-01
Administering pharmaceuticals to the scala tympani of the inner ear is a common approach for studying cochlear physiology and mechanics. We present here a novel method for in vivo drug delivery in a controlled manner to sealed ears. Injections of ototoxic solutions were applied from a pipette sealed into a fenestra in the cochlear apex, progressively driving solutions along the length of the scala tympani toward the cochlear aqueduct at the base. Drugs can be delivered rapidly or slowly. In this report we focus on slow delivery, in which the injection rate is automatically adjusted to account for the varying cross-sectional area of the scala tympani, thereby driving the solution front at a uniform rate. Objective measurements originating from finely spaced, low- to high-characteristic-frequency cochlear places were sequentially affected. Comparison with existing method(s): Controlled administration of pharmaceuticals into the cochlear apex overcomes a number of serious limitations of previously established methods, such as cochlear perfusion with an injection pipette in the cochlear base: the drug concentration achieved is more precisely controlled, drug concentrations remain in the scala tympani and are not rapidly washed out by cerebrospinal fluid flow, and the entire length of the cochlear spiral can be treated quickly or slowly over time. Controlled administration of solutions into the cochlear apex is a powerful approach for sequentially affecting objective measurements originating from finely spaced cochlear regions and allows, for the first time, the spatial origin of CAPs to be objectively defined. Copyright © 2016 Elsevier B.V. All rights reserved.
Peeling the onion: ribosomes are ancient molecular fossils.
Hsiao, Chiaolong; Mohan, Srividya; Kalahar, Benson K; Williams, Loren Dean
2009-11-01
We describe a method to establish chronologies of ancient ribosomal evolution. The method uses structure-based and sequence-based comparison of the large subunits (LSUs) of Haloarcula marismortui and Thermus thermophilus. These are the highest resolution ribosome structures available and represent disparate regions of the evolutionary tree. We have sectioned the superimposed LSUs into concentric shells, like an onion, using the site of peptidyl transfer as the origin (the PT-origin). This spherical approximation combined with a shell-by-shell comparison captures significant information along the evolutionary time line revealing, for example, that sequence and conformational similarity of the 23S rRNAs are greatest near the PT-origin and diverge smoothly with distance from it. The results suggest that the conformation and interactions of both RNA and protein can be described as changing, in an observable manner, over evolutionary time. The tendency of macromolecules to assume regular secondary structural elements such as A-form helices with Watson-Crick base pairs (RNA) and alpha-helices and beta-sheets (protein) is low at early time points but increases as time progresses. The conformations of ribosomal protein components near the PT-origin suggest that they may be molecular fossils of the peptide ancestors of ribosomal proteins. Their abbreviated length may have proscribed formation of secondary structure, which is indeed nearly absent from the region of the LSU nearest the PT-origin. Formation and evolution of the early PT center may have involved Mg(2+)-mediated assembly of at least partially single-stranded RNA oligomers or polymers. As one moves from center to periphery, proteins appear to replace magnesium ions. The LSU is known to have undergone large-scale conformation changes upon assembly. The T. thermophilus LSU analyzed here is part of a fully assembled ribosome, whereas the H. marismortui LSU analyzed here is dissociated from other ribosomal components. Large-scale conformational differences in the 23S rRNAs are evident from superimposition and prevent structural alignment of some portions of the rRNAs, including the L1 stalk.
Survey of gene splicing algorithms based on reads.
Si, Xiuhua; Wang, Qian; Zhang, Lei; Wu, Ruo; Ma, Jiquan
2017-11-02
Gene splicing is the process of assembling a large number of unordered short sequence fragments into the original genome sequence as accurately as possible. Several popular read-based splicing algorithms are reviewed in this article, including reference-genome algorithms and de novo splicing algorithms (greedy extension, Overlap-Layout-Consensus graph, De Bruijn graph). We also discuss a new splicing method based on the MapReduce strategy and Hadoop. By comparing these algorithms, some conclusions are drawn and some suggestions on gene splicing research are made.
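As an illustration of the De Bruijn graph strategy surveyed above, the following minimal Python sketch (the function name and toy reads are ours, not from the survey) builds a graph whose nodes are (k-1)-mers and walks unambiguous edges to emit contigs; production assemblers add error correction, coverage filtering and graph simplification on top of this core.

```python
from collections import defaultdict

def de_bruijn_contigs(reads, k=4):
    """Toy De Bruijn assembly: nodes are (k-1)-mers, edges are k-mers;
    contigs are walks along unambiguous (single-successor) edges."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])

    contigs = []
    for start in list(graph):
        node, contig, seen = start, start, {start}
        while len(graph[node]) == 1:
            nxt = next(iter(graph[node]))
            if nxt in seen:            # guard against cycles
                break
            contig += nxt[-1]
            node = nxt
            seen.add(node)
        contigs.append(contig)
    return contigs

print(de_bruijn_contigs(["ACGTAC", "GTACGT", "TACGTT"], k=4))
```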
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and Ant-miner algorithms are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.
Longobardi, Francesco; Casiello, Grazia; Centonze, Valentina; Catucci, Lucia; Agostiano, Angela
2017-08-01
Although table grape is one of the most cultivated and consumed fruits worldwide, no study has been reported on its geographical origin or agronomic practice based on stable isotope ratios. This study aimed to evaluate the usefulness of isotopic ratios (i.e. ²H/¹H, ¹³C/¹²C, ¹⁵N/¹⁴N and ¹⁸O/¹⁶O) as possible markers to discriminate the agronomic practice (conventional versus organic farming) and provenance of table grape. In order to quantitatively evaluate which of the isotopic variables were more discriminating, a t test was carried out, in light of which only δ¹³C and δ¹⁸O provided statistically significant differences (P ≤ 0.05) for the discrimination of geographical origin and farming method. Principal component analysis (PCA) showed no good separation of samples differing in geographical area and agronomic practice; thus, for classification purposes, supervised approaches were carried out. In particular, general discriminant analysis (GDA) was used, resulting in prediction abilities of 75.0 and 92.2% for the discrimination of farming method and origin respectively. The present findings suggest that stable isotopes (i.e. δ¹⁸O, δ²H and δ¹³C) combined with chemometrics can be successfully applied to discriminate the provenance of table grape. However, the use of bulk nitrogen isotopes was not effective for farming method discrimination. © 2016 Society of Chemical Industry.
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm not only enhances image information effectively but also preserves the brightness and details of the original image well. PMID:29403529
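The segment-wise equalization at the heart of MVSIHE can be sketched in a few lines. The Python sketch below is our own rough approximation, assuming the four sub-histograms are bounded at μ-σ, μ and μ+σ of the luminance; the paper's exact bin-modification, normalization and fusion steps are simplified away.

```python
import numpy as np

def subimage_hist_eq(img):
    """Sketch of mean/variance-based sub-histogram equalization:
    split the intensity range at mu-sigma, mu, mu+sigma and
    equalize each segment within its own bounds."""
    img = img.astype(np.float64)
    mu, sigma = img.mean(), img.std()
    bounds = [0.0, max(mu - sigma, 0.0), mu, min(mu + sigma, 255.0), 255.0]
    out = np.zeros_like(img)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img <= hi) if hi == 255 else (img >= lo) & (img < hi)
        vals = img[mask]
        if vals.size == 0 or hi <= lo:
            continue
        # empirical CDF of the segment, stretched back onto [lo, hi]
        ranks = vals.argsort().argsort().astype(np.float64)
        out[mask] = lo + (hi - lo) * ranks / max(vals.size - 1, 1)
    return out.astype(np.uint8)

noisy = np.random.randint(60, 180, size=(64, 64)).astype(np.uint8)
enhanced = subimage_hist_eq(noisy)
print(int(enhanced.min()), int(enhanced.max()))   # range is stretched
```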
An iterative shrinkage approach to total-variation image restoration.
Michailovich, Oleg V
2011-05-01
The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information--commonly referred to as simply priors--is essential for image restoration, rendering it stable and robust to noise. Moreover, using the priors makes the recovered images exhibit some plausible features of their original counterpart. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based upon the method of iterative shrinkage (a.k.a. iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms, which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method provides restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.
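The two building blocks named in the abstract, linear filtering and soft thresholding, form the generic iterative-shrinkage (ISTA) template. The hedged Python sketch below shows that template for a plain l1-regularized deconvolution; it is not the paper's TV-specific algorithm, whose shrinkage step acts on the TV functional directly, and all names are ours.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, H, lam=0.1, n_iter=200):
    """Generic iterative-shrinkage template: alternate a linear
    (Landweber) filtering step with soft thresholding; solves
    min_x 0.5*||Hx - y||^2 + lam*||x||_1 for a matrix operator H."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2    # 1/L, L = Lipschitz constant
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * H.T @ (y - H @ x), lam * step)
    return x

# Toy deblurring: H is a blurring matrix, x_true is sparse.
rng = np.random.default_rng(0)
H = np.abs(rng.normal(size=(100, 100)))
H /= H.sum(axis=1, keepdims=True)
x_true = np.zeros(100)
x_true[[20, 50, 80]] = 1.0
y = H @ x_true + 0.01 * rng.normal(size=100)
print(np.round(ista(y, H, lam=0.001)[[20, 50, 80]], 2))
```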
Wenzel, Katrin; Schulze-Rothe, Sarah; Müller, Johannes; Wallukat, Gerd; Haberland, Annekathrin
2018-01-01
Cell-based analytics for the detection of the beta1-adrenoceptor autoantibody (beta1-AAB) are functional, yet difficult to handle, and should be replaced by easily applicable, routine lab methods. Endeavors to develop solid-phase-based assays such as ELISA that exploit epitope moieties for trapping autoantibodies are ongoing. These solid-phase-based assays, however, are often unreliable when used with human patient material, in contrast to animal-derived autoantibodies. We therefore tested an immunogen peptide-based ELISA for the detection of beta1-AAB, and compared commercially available goat antibodies against the 2nd extracellular loop of the human beta1-adrenoceptor (ADRB1-AB) to autoantibodies enriched from patient material. The functionality of these autoantibodies was tested in a cell-based assay for comparison, and their structural appearance was investigated using 2D gel electrophoresis. The ELISA showed a limit of detection for ADRB1-AB of about 1.5 nmol antibody/L when spiked into human control serum and only about 25 nmol/L when spiked into species-identical (goat) matrix material. When applied to samples of human origin, the ELISA failed to identify the specific beta1-AABs. A low concentration of beta1-AAB, together with the structural inconsistency of the patient-derived samples seen in the 2D gel appearance, might contribute to the failure of the peptide-based ELISA technology to detect human beta1-AABs.
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is a theoretical basis of scientific instrument engineering, and mathematical simulation of the underlying physical processes is the basis for the contemporary design of complicated electron-optics devices. Such numerical simulation problems are effectively solved by CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged-particle trajectory computation with accuracy control, and original methods for finding the conditions for angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
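For reference, the plain maximum margin criterion reduces to an eigenproblem on the difference of scatter matrices, maximizing tr(Wᵀ(Sb - Sw)W). The minimal Python sketch below is our own and omits the statistically-uncorrelated and orthogonality constraints that the paper adds.

```python
import numpy as np

def mmc_projection(X, y, n_dims=2):
    """MMC sketch: project onto the leading eigenvectors of (Sb - Sw),
    maximizing between-class scatter minus within-class scatter."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - overall_mean)[:, None]
        Sb += Xc.shape[0] * diff @ diff.T     # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)         # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)      # symmetric -> eigh
    W = vecs[:, np.argsort(vals)[::-1][:n_dims]]
    return X @ W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
print(mmc_projection(X, y).shape)   # (40, 2)
```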
Radiant exchange in partially specular architectural environments
NASA Astrophysics Data System (ADS)
Beamer, C. Walter; Muehleisen, Ralph T.
2003-10-01
The radiant exchange method, also known as radiosity, was originally developed for thermal radiative heat transfer applications. Later it was used to model architectural lighting systems, and more recently it has been extended to model acoustic systems. While there are subtle differences in these applications, the basic method is based on solving a system of energy balance equations, and it is best applied to spaces with mainly diffuse reflecting surfaces. The obvious drawback to this method is that it is based on the assumption that all surfaces in the system are diffuse reflectors. Because almost all architectural systems have at least some partially specular reflecting surfaces, it is important to extend the radiant exchange method to deal with this type of surface reflection. [Work supported by NSF.]
Fruit fly optimization based least square support vector regression for blind image restoration
NASA Astrophysics Data System (ADS)
Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei
2014-11-01
The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, where the recovery process must be treated as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems; therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters of the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created by mapping a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, and the two parameters of the LSSVR are optimized through FOA, with the fitness function of FOA calculated from the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show that the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.
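To make the FOA step concrete, here is a hedged Python sketch of the basic fruit fly search loop for two positive parameters. The toy fitness function merely stands in for the LSSVR restoration-error fitness used in the paper, and all names are ours.

```python
import numpy as np

def foa_minimize(fitness, n_flies=20, n_iter=100, seed=0):
    """Fruit fly optimization sketch for two positive parameters.
    Each fly takes a random step from the swarm location; the candidate
    parameter is the reciprocal of its distance to the origin (the
    'smell concentration'), and the swarm moves to the best fly."""
    rng = np.random.default_rng(seed)
    axis = rng.uniform(0, 1, size=(2, 2))      # swarm (x, y) per parameter
    best_params, best_fit = None, np.inf
    for _ in range(n_iter):
        steps = axis[None, :, :] + rng.uniform(-1, 1, size=(n_flies, 2, 2))
        dist = np.linalg.norm(steps, axis=2)   # distance to origin
        smell = 1.0 / (dist + 1e-12)           # candidate parameter values
        for i in range(n_flies):
            f = fitness(smell[i])
            if f < best_fit:
                best_fit, best_params = f, smell[i]
                axis = steps[i]                # swarm flies toward best smell
    return best_params, best_fit

# Toy fitness standing in for LSSVR cross-validation/restoration error.
toy = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
print(foa_minimize(toy))
```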
The origin of the mass of the Nambu-Goldstone bosons
NASA Astrophysics Data System (ADS)
Arraut, Ivan
2018-03-01
We explain the origin of the mass for the Nambu-Goldstone bosons when there is a chemical potential in the action which explicitly breaks the symmetry. The method is based on the number of independent histories for the interaction of the pair of Nambu-Goldstone bosons with the degenerate vacuum (triangle relations). The analysis suggests that under some circumstances, pairs of massive Nambu-Goldstone bosons can become a single degree of freedom with an effective mass defined by the superposition of the individual masses of each boson. Possible mass oscillations for the Nambu-Goldstone bosons are discussed.
2016-01-01
Twenty-five samples of Bulimulus species are studied, partly from localities within their known distribution range, partly based on interceptions where the material originates from localities where the species seem to be recently introduced and non-native. Molecular study of cytochrome oxidase 1 (CO1) reveals the origin of some of these introductions, but is less conclusive for others. Four different methods for species delimitation were applied, which did not result in unambiguous species hypotheses. For a rapid identification of morphologically indistinct species, a more comprehensive database of sequences is needed. PMID:27069787
Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Zhang, Henggui
2017-03-01
Atrial tachy-arrhythmias, such as atrial fibrillation (AF), are characterised by irregular electrical activity in the atria, generally associated with erratic excitation underlain by re-entrant scroll waves, fibrillatory conduction of multiple wavelets or rapid focal activity. Epidemiological studies have shown an increase in AF prevalence in the developed world associated with an ageing society, highlighting the need for effective treatment options. Catheter ablation therapy, commonly used in the treatment of AF, requires spatial information on atrial electrical excitation. The standard 12-lead electrocardiogram (ECG) provides a method for non-invasive identification of the presence of arrhythmia, due to irregularity in the ECG signal associated with atrial activation compared to sinus rhythm, but has limitations in providing specific spatial information. There is therefore a pressing need to develop novel methods to identify and locate the origin of arrhythmic excitation. Invasive methods provide direct information on atrial activity, but may induce clinical complications. Non-invasive methods avoid such complications, but their development presents a greater challenge due to the non-direct nature of monitoring. Algorithms based on the ECG signals in multiple leads (e.g. a 64-lead vest) may provide a viable approach. In this study, we used a biophysically detailed model of the human atria and torso to investigate the correlation between the morphology of the ECG signals from a 64-lead vest and the location of the origin of rapid atrial excitation arising from rapid focal activity and/or re-entrant scroll waves. A focus-location algorithm was then constructed from this correlation. The algorithm had success rates of 93% and 76% for correctly identifying the origin of focal and re-entrant excitation, respectively, with a spatial resolution of 40 mm. The general approach allows its application to any multi-lead ECG system. This represents a significant extension to our previously developed algorithms for predicting AF origins associated with focal activity.
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code, and the QR code is then treated as the input image in the input plane of a ghost imaging setup. After the measurements, the traditional correlation algorithm of ghost imaging is utilized to reconstruct a low-quality image (in QR code form). With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is in effect a post-processing step. Taking advantage of the high error-correction capability of QR codes, the original information can then be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size, the more measurements are needed; for our method, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
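The "traditional correlation algorithm of ghost imaging" referred to above reconstructs the object by correlating fluctuations of the single-pixel (bucket) signal with the illumination patterns. Here is a self-contained Python sketch; the toy object and pattern count are ours for illustration.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Traditional GI correlation: G(x, y) = <(B - <B>) * I(x, y)>,
    averaging speckle patterns weighted by bucket-signal fluctuations."""
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns, axes=(0, 0)) / len(bucket)

rng = np.random.default_rng(0)
obj = np.zeros((32, 32))
obj[8:24, 8:24] = 1.0                           # stand-in for a QR code
patterns = rng.random((4000, 32, 32))           # random illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))      # single-pixel measurements
recon = ghost_image(patterns, bucket)
# The low-contrast estimate should correlate with the object:
print(float(np.corrcoef(recon.ravel(), obj.ravel())[0, 1]))
```

In the paper's scheme, this low-contrast estimate would then seed the Gerchberg-Saxton-like refinement before QR decoding.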
NASA Astrophysics Data System (ADS)
Yadav, Poonam Lata; Singh, Hukum
2018-06-01
To maintain the security of image encryption and to protect the image from intruders, a new asymmetric cryptosystem based on the fractional Hartley transform (FrHT) and the Arnold transform (AT) is proposed. AT is an image-scrambling method in which the pixels of the image are reorganized. In this cryptosystem, AT is used to spread the information content of the two original images across the encrypted images, increasing the safety of the encoded images. A structured phase mask (SPM) and a hybrid mask (HM) are used as encryption keys. The original image is first multiplied with the SPM and HM and then transformed with the direct and inverse fractional Hartley transforms to obtain the encrypted image. The fractional orders of the FrHT and the parameters of the AT constitute the keys of the encryption and decryption methods; only if both sets of keys are used correctly can the original image be retrieved. The recommended method strengthens the security of DRPE by enlarging the key space and the number of parameters, and it is robust against various attacks. The strength of the recommended cryptosystem is evaluated using MATLAB 8.3.0.52 (R2014a); a set of simulated results shows the power of the proposed asymmetric cryptosystem.
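The Arnold transform itself is compact enough to state directly: on a square N x N image it maps pixel (x, y) to ((x + y) mod N, (x + 2y) mod N). A minimal Python sketch of the forward scramble follows (our own illustration; decryption applies the inverse map, or equivalently iterates the forward map until its period is reached).

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold (cat map) scrambling of a square N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]     # bijective, since det [[1,1],[1,2]] = 1
        out = scrambled
    return out

img = np.arange(16).reshape(4, 4)
print(arnold_transform(img, iterations=3))
```

Because the map is area-preserving and periodic, the iteration count acts as an additional (integer) key parameter.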
Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.
Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant
2013-01-01
Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
Lüdeke, Catharina H M; Fischer, Markus; LaFon, Patti; Cooper, Kara; Jones, Jessica L
2014-07-01
Vibrio parahaemolyticus is the leading cause of infectious illness associated with seafood consumption in the United States. Molecular fingerprinting of strains has become a valuable research tool for understanding this pathogen. However, there are many subtyping methods available and little information on how they compare to one another. For this study, a collection of 67 oyster and 77 clinical V. parahaemolyticus isolates were analyzed by three subtyping methods--intergenic spacer region (ISR-1) analysis, direct genome restriction analysis (DGREA), and pulsed-field gel electrophoresis (PFGE)--to determine the utility of these methods for discriminatory subtyping. ISR-1 analysis, run as previously described, provided the lowest discrimination of all the methods (discriminatory index [DI]=0.8665). However, using a broader analytical range than previously reported, ISR-1 clustered isolates based on origin (oyster versus clinical) and had a DI=0.9986. DGREA provided a DI=0.9993-0.9995, but did not consistently cluster the isolates by any identifiable characteristics (origin, serotype, or virulence genotype), and ∼15% of isolates were untypeable by this method. PFGE provided a DI=0.9998 when using the combined pattern analysis of both restriction enzymes, SfiI and NotI. This analysis was more discriminatory than using either enzyme pattern alone and primarily grouped isolates by serotype, regardless of strain origin (clinical or oyster) or presence of currently accepted virulence markers. These results indicate that PFGE and ISR-1 are more reliable methods than DGREA for subtyping V. parahaemolyticus. Additionally, ISR-1 may provide an indication of pathogenic potential; however, more detailed studies are needed. These data highlight the diversity within V. parahaemolyticus and the need for appropriate selection of subtyping methods depending on the study objectives.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that are optimal, according to certain criteria, under the given initial requirements and conditions. The defining characteristic of an optical-electronic system (OES) for any purpose is its detection threshold; the required functional quality of the device or system is achieved on the basis of this property. The criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Stirman, Shannon Wiltsey; Gamarra, Jennifer; Bartlett, Brooke; Calloway, Amber; Gutner, Cassidy
2017-12-01
This review describes methods used to examine the modifications and adaptations to evidence-based psychological treatments (EBPTs), assesses what is known about the impact of modifications and adaptations to EBPTs, and makes recommendations for future research and clinical care. One hundred eight primary studies and three meta-analyses were identified. All studies examined planned adaptations, and many simultaneously investigated multiple types of adaptations. With the exception of studies on adding or removing specific EBPT elements, few studies compared adapted EBPTs to the original protocols. There was little evidence that adaptations in the studies were detrimental, but there was also limited consistent evidence that adapted protocols outperformed the original protocols, with the exception of adding components to EBPTs. Implications for EBPT delivery and future research are discussed.
Lithology and aggregate quality attributes for the digital geologic map of Colorado
Knepper, Daniel H.; Green, Gregory N.; Langer, William H.
1999-01-01
This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently, the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map.
Perrin, Karen M; Burke, Somer Goad; O'Connor, Danielle; Walby, Gary; Shippey, Claire; Pitt, Seraphine; McDermott, Robert J; Forthofer, Melinda S
2006-01-01
Background and objectives: Disease self-management programs have been a popular approach to reducing morbidity and mortality from chronic disease. Replicating an evidence-based disease management program successfully requires practitioners to ensure fidelity to the original program design. Methods: The Florida Health Literacy Study (FHLS) was conducted to investigate the implementation impact of the Pfizer, Inc. Diabetes Mellitus and Hypertension Disease Self-Management Program based on health literacy principles in 14 community health centers in Florida. The intervention components discussed include health educator recruitment and training, patient recruitment, class sessions, utilization of program materials, translation of program manuals, patient retention and follow-up, and technical assistance. Results: This report describes challenges associated with achieving a balance between adaptation for cultural relevance and fidelity when implementing the health education program across clinic sites. This balance was necessary to achieve effectiveness of the disease self-management program. The FHLS program was implemented with a high degree of fidelity to the original design and used original program materials. Adaptations identified as advantageous to program participation are discussed, such as implementing alternate methods for recruiting patients and developing staff incentives for participation. Conclusion: Effective program implementation depends on the talent, skill and willing participation of clinic staff. Program adaptations that conserve staff time and resources and recognize their contribution can increase program effectiveness without jeopardizing its fidelity. PMID:17067388
The application of complex network time series analysis in turbulent heated jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charakopoulos, A. K.; Karakasidis, T. E., E-mail: thkarak@uth.gr; Liakopoulos, A.
2014-06-15
In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series from regions close to the jet axis from time series originating at regions with a different dynamical regime, based on the constructed network properties. Applying the phase-space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those of the networks derived from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties that present almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
Zhou, Y.; Ren, Y.; Tang, D.; Bohor, B.
1994-01-01
Kaolinitic tonsteins of altered synsedimentary volcanic ash-fall origin are well developed in the Late Permian coal-bearing formations of eastern Yunnan Province. Because of their unique origin, wide lateral extent, relatively constant thickness and sharp contacts with enclosing strata, great importance has been attached to these isochronous petrographic markers. In order to compare tonsteins with co-existing, non-cineritic claystones and characterize the individuality of tonsteins from different horizons for coal bed correlation, a semi-quantitative method was developed that is based on statistical analyses of the concentration and morphology of zircons and their spatial distribution patterns. This zircon-based analytical method also serves as a means for reconstructing volcanic ash-fall dispersal patterns. The results demonstrate that zircons from claystones of two different origins (i.e., tonstein and non-cineritic claystone) differ greatly in their relative abundances, crystal morphologies and spatial distribution patterns. Tonsteins from the same area but from different horizons are characterized by their own unique statistical patterns in terms of zircon concentration values and morphologic parameters (crystal length, width and the ratio of these values), thus facilitating stratigraphic correlation. Zircons from the same tonstein horizon also show continuous variation in these statistical patterns as a function of areal distribution, making it possible to identify the main path and direction in which the volcanic source materials were transported by prevailing winds. © 1994.
Identification of key ancestors of modern germplasm in a breeding program of maize.
Technow, F; Schrag, T A; Schipprack, W; Melchinger, A E
2014-12-01
Probabilities of gene origin computed from the genomic kinship matrix can accurately identify key ancestors of modern germplasm. Identifying the key ancestors of modern plant breeding populations can provide valuable insights into the history of a breeding program and provide reference genomes for next-generation whole genome sequencing. In an animal breeding context, a method was developed that employs probabilities of gene origin, computed from the pedigree-based additive kinship matrix, for identifying key ancestors. Because reliable and complete pedigree information is often not available in plant breeding, we replaced the additive kinship matrix with the genomic kinship matrix. As a proof of concept, we applied this approach to simulated data sets with known ancestries. The relative contribution of the ancestral lines to later generations could be determined with high accuracy, with and without selection. Our method was subsequently used for identifying the key ancestors of the modern Dent germplasm of the public maize breeding program of the University of Hohenheim. We found that the modern germplasm can be traced back to six or seven key ancestors, with one or two of them having a disproportionately large contribution. These results largely corroborated conjectures based on early records of the breeding program. We conclude that probabilities of gene origin computed from the genomic kinship matrix can be used for identifying key ancestors in breeding programs and estimating the proportion of genes contributed by them.
Probabilistic objective functions for margin-less IMRT planning
NASA Astrophysics Data System (ADS)
Bohoslavsky, Román; Witte, Marnix G.; Janssen, Tomas M.; van Herk, Marcel
2013-06-01
We present a method to implement probabilistic treatment planning of intensity-modulated radiation therapy using custom software plugins in a commercial treatment planning system. Our method avoids the definition of safety-margins by directly including the effect of geometrical uncertainties during optimization when objective functions are evaluated. Because the shape of the resulting dose distribution implicitly defines the robustness of the plan, the optimizer has much more flexibility than with a margin-based approach. We expect that this added flexibility helps to automatically strike a better balance between target coverage and dose reduction for surrounding healthy tissue, especially for cases where the planning target volume overlaps organs at risk. Prostate cancer treatment planning was chosen to develop our method, including a novel technique to include rotational uncertainties. Based on population statistics, translations and rotations are simulated independently following a marker-based IGRT correction strategy. The effects of random and systematic errors are incorporated by first blurring and then shifting the dose distribution with respect to the clinical target volume. For simplicity and efficiency, dose-shift invariance and a rigid-body approximation are assumed. Three prostate cases were replanned using our probabilistic objective functions. To compare clinical and probabilistic plans, an evaluation tool was used that explicitly incorporates geometric uncertainties using Monte-Carlo methods. The new plans achieved similar or better dose distributions than the original clinical plans in terms of expected target coverage and rectum wall sparing. Plan optimization times were only about a factor of two higher than in the original clinical system. In conclusion, we have developed a practical planning tool that enables margin-less probability-based treatment planning with acceptable planning times, achieving the first system that is feasible for clinical implementation.
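The blur-then-shift model described above can be sketched in a few lines. This is our own simplified Python illustration, assuming isotropic voxels, a scalar random-error SD and rigid translations only; the paper additionally handles rotations and evaluates plans with full Monte-Carlo methods.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def expected_target_dose(dose, sigma_random_mm, systematic_shifts_mm,
                         voxel_mm=2.0):
    """Blur-then-shift sketch: random (execution) errors blur the planned
    dose; systematic (preparation) errors are sampled as rigid shifts of
    the blurred dose relative to the CTV, then averaged."""
    blurred = gaussian_filter(dose, sigma=sigma_random_mm / voxel_mm)
    samples = [shift(blurred, np.asarray(s) / voxel_mm, order=1)
               for s in systematic_shifts_mm]
    return np.mean(samples, axis=0)        # expectation over systematic errors

dose = np.zeros((40, 40, 40))
dose[10:30, 10:30, 10:30] = 70.0           # toy 70 Gy dose box
rng = np.random.default_rng(0)
shifts = rng.normal(0.0, 3.0, size=(50, 3))   # 3 mm systematic SD, 50 samples
expected = expected_target_dose(dose, sigma_random_mm=2.0,
                                systematic_shifts_mm=shifts)
print(float(expected[20, 20, 20]))         # expected dose at the CTV center
```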
Rapid multi-modality preregistration based on SIFT descriptor.
Chen, Jian; Tian, Jie
2006-01-01
This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying SIFT preregistration before the refined registration reduces the overall computational complexity of the registration pipeline. The features of SIFT are highly distinctive and invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast; SIFT is robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can deal with multimodality preregistration.
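A coarse SIFT prematching step in the spirit of Lowe's method can be sketched with standard OpenCV calls. This uses the stock SIFT descriptor, not the authors' multimodality-altered one, and the file names are hypothetical.

```python
import cv2

def sift_prematch(img1, img2, ratio=0.75):
    """Coarse keypoint matching: detect SIFT keypoints in both images,
    then keep matches that pass Lowe's nearest-neighbor ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    # Point pairs usable to initialize a rigid/affine pre-alignment
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]

img1 = cv2.imread("fixed.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
img2 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)
pairs = sift_prematch(img1, img2)
print(len(pairs), "coarse correspondences")
```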
A diagnostic system for articular cartilage using non-destructive pulsed laser irradiation.
Sato, Masato; Ishihara, Miya; Kikuchi, Makoto; Mochida, Joji
2011-07-01
Osteoarthritis involves dysfunction caused by cartilage degeneration, but objective evaluation methodologies based on the original function of the articular cartilage remain unavailable. Evaluations for osteoarthritis are mostly based simply on patient symptoms or the degree of joint space narrowing on X-ray images. Accurate measurement and quantitative evaluation of the mechanical characteristics of the cartilage is important, and the tissue properties of the original articular cartilage must be clarified to understand the pathological condition in detail and to correctly judge the efficacy of treatment. We have developed new methods to measure some essential properties of cartilage: a photoacoustic measurement method; and time-resolved fluorescence spectroscopy. A nanosecond-pulsed laser, which is completely non-destructive, is focused onto the target cartilage and induces a photoacoustic wave that will propagate with attenuation and is affected by the viscoelasticity of the surrounding cartilage. We also investigated whether pulsed laser irradiation and the measurement of excited autofluorescence allow real-time, non-invasive evaluation of tissue characteristics. The decay time, during which the amplitude of the photoacoustic wave is reduced by a factor of 1/e, represents the key numerical value used to characterize and evaluate the viscoelasticity and rheological behavior of the cartilage. Our findings show that time-resolved laser-induced autofluorescence spectroscopy (TR-LIFS) is useful for evaluating tissue-engineered cartilage. Photoacoustic measurement and TR-LIFS, predicated on the interactions between optics and living organs, is a suitable methodology for diagnosis during arthroscopy, allowing quantitative and multidirectional evaluation of the original function of the cartilage based on a variety of parameters. Copyright © 2011 Wiley-Liss, Inc.
Lavoué, Sébastien; Miya, Masaki; Arnegard, Matthew E.; Sullivan, John P.; Hopkins, Carl D.; Nishida, Mutsumi
2012-01-01
One of the most remarkable examples of convergent evolution among vertebrates is illustrated by the independent origins of an active electric sense in South American and African weakly electric fishes, the Gymnotiformes and Mormyroidea, respectively. These groups independently evolved similar complex systems for object localization and communication via the generation and reception of weak electric fields. While good estimates of divergence times are critical to understanding the temporal context for the evolution and diversification of these two groups, their respective ages have been difficult to estimate due to the absence of an informative fossil record, use of strict molecular clock models in previous studies, and/or incomplete taxonomic sampling. Here, we examine the timing of the origins of the Gymnotiformes and the Mormyroidea using complete mitogenome sequences and a parametric Bayesian method for divergence time reconstruction. Under two different fossil-based calibration methods, we estimated similar ages for the independent origins of the Mormyroidea and Gymnotiformes. Our absolute estimates for the origins of these groups either slightly postdate, or just predate, the final separation of Africa and South America by continental drift. The most recent common ancestor of the Mormyroidea and Gymnotiformes was found to be a non-electrogenic basal teleost living more than 85 millions years earlier. For both electric fish lineages, we also estimated similar intervals (16–19 or 22–26 million years, depending on calibration method) between the appearance of electroreception and the origin of myogenic electric organs, providing rough upper estimates for the time periods during which these complex electric organs evolved de novo from skeletal muscle precursors. The fact that the Gymnotiformes and Mormyroidea are of similar age enhances the comparative value of the weakly electric fish system for investigating pathways to evolutionary novelty, as well as the influences of key innovations in communication on the process of species radiation. PMID:22606250
Effective evaluation of privacy protection techniques in visible and thermal imagery
NASA Astrophysics Data System (ADS)
Nawaz, Tahir; Berg, Amanda; Ferryman, James; Ahlberg, Jörgen; Felsberg, Michael
2017-09-01
Privacy protection may be defined as replacing the original content in an image region with a (less intrusive) content having modified target appearance information to make it less recognizable by applying a privacy protection technique. Indeed, the development of privacy protection techniques also needs to be complemented with an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on the use of subjective judgments or assume a specific target type in image data and use target detection and recognition accuracies to assess privacy protection. An annotation-free evaluation method that is neither subjective nor assumes a specific target type is proposed. It assesses two key aspects of privacy protection: "protection" and "utility." Protection is quantified as an appearance similarity, and utility is measured as a structural similarity between original and privacy-protected image regions. We performed an extensive experimentation using six challenging datasets (having 12 video sequences), including a new dataset (having six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate effectiveness of the proposed method by evaluating six image-based privacy protection techniques and also show comparisons of the proposed method over existing methods.
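The protection/utility split can be mimicked in a few lines. In the hedged Python sketch below, "utility" is structural similarity (SSIM) as in the evaluation described above, while histogram correlation is merely our stand-in for the appearance-similarity measure; the function name and toy data are ours.

```python
import numpy as np
from skimage.metrics import structural_similarity

def protection_and_utility(original, protected):
    """Sketch of the two evaluation axes: 'protection' as loss of
    appearance similarity (here: histogram correlation as a stand-in),
    'utility' as SSIM between original and protected image regions."""
    h1, _ = np.histogram(original, bins=64, range=(0, 255), density=True)
    h2, _ = np.histogram(protected, bins=64, range=(0, 255), density=True)
    appearance = float(np.corrcoef(h1, h2)[0, 1])
    protection = 1.0 - appearance
    utility = structural_similarity(original, protected, data_range=255)
    return protection, utility

rng = np.random.default_rng(0)
region = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # toy image region
noised = np.clip(region.astype(float) + rng.normal(0, 30, region.shape),
                 0, 255).astype(np.uint8)                  # toy 'protected' region
print(protection_and_utility(region, noised))
```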
Carbon dioxide emission factors for U.S. coal by origin and destination
Quick, J.C.
2010-01-01
This paper describes a method that uses published data to calculate locally robust CO2 emission factors for U.S. coal. The method is demonstrated by calculating CO2 emission factors by coal origin (223 counties, in 1999) and destination (479 power plants, in 2005). Locally robust CO2 emission factors should improve the accuracy and verification of greenhouse gas emission measurements from individual coal-fired power plants. Based largely on the county origin, average emission factors for U.S. lignite, subbituminous, bituminous, and anthracite coal produced during 1999 were 92.97, 91.97, 88.20, and 98.91 kg CO2/GJ gross, respectively. However, greater variation is observed within these rank classes than between them, which limits the reliability of CO2 emission factors specified by coal rank. Emission factors calculated by destination (power plant) showed greater variation than those listed in the Emissions & Generation Resource Integrated Database (eGRID), which exhibit an unlikely uniformity that is inconsistent with the natural variation of CO2 emission factors for U.S. coal. © 2010 American Chemical Society.
MetaPhinder—Identifying Bacteriophage Sequences in Metagenomic Data Sets
Jurtz, Vanessa Isabell; Villarroel, Julia; Lund, Ole; Voldby Larsen, Mette; Nielsen, Morten
2016-01-01
Bacteriophages are the most abundant biological entity on the planet, but at the same time do not account for much of the genetic material isolated from most environments due to their small genome sizes. They also show great genetic diversity and mosaic genomes, making it challenging to analyze and understand them. Here we present MetaPhinder, a method to identify assembled genomic fragments (i.e., contigs) of phage origin in metagenomic data sets. The method is based on a comparison to a database of whole genome bacteriophage sequences, integrating hits to multiple genomes to accommodate the mosaic genome structure of many bacteriophages. The method is demonstrated to outperform both BLAST methods based on single hits and methods based on k-mer comparisons. MetaPhinder is available as a web service at the Center for Genomic Epidemiology https://cge.cbs.dtu.dk/services/MetaPhinder/, while the source code can be downloaded from https://bitbucket.org/genomicepidemiology/metaphinder or https://github.com/vanessajurtz/MetaPhinder. PMID:27684958
NASA Astrophysics Data System (ADS)
Zhang, Kejiang; Kluck, Cheryl; Achari, Gopal
2009-11-01
A ranking system for contaminated sites based on comparative risk methodology using fuzzy Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) was developed in this article. It combines the concepts of fuzzy sets to represent uncertain site information with the PROMETHEE, a subgroup of Multi-Criteria Decision Making (MCDM) methods. Criteria are identified based on a combination of the attributes (toxicity, exposure, and receptors) associated with the potential human health and ecological risks posed by contaminated sites, chemical properties, site geology and hydrogeology and contaminant transport phenomena. Original site data are directly used avoiding the subjective assignment of scores to site attributes. When the input data are numeric and crisp the PROMETHEE method can be used. The Fuzzy PROMETHEE method is preferred when substantial uncertainties and subjectivities exist in site information. The PROMETHEE and fuzzy PROMETHEE methods are both used in this research to compare the sites. The case study shows that this methodology provides reasonable results.
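For orientation, the crisp PROMETHEE II core that underlies the fuzzy extension can be sketched as below. This is our own minimal Python version using the "usual" preference function; the fuzzy variant replaces the crisp scores and flows with fuzzy numbers, and the toy criteria values are illustrative only.

```python
import numpy as np

def promethee_ii(scores, weights, prefer_max):
    """Crisp PROMETHEE II sketch: pairwise outranking degrees ->
    positive/negative preference flows -> net flow for ranking."""
    m, k = scores.shape                        # m sites, k criteria
    s = np.where(prefer_max, scores, -scores)  # turn all criteria into 'max'
    pi = np.zeros((m, m))                      # aggregated preference a over b
    for j in range(k):
        diff = s[:, None, j] - s[None, :, j]
        pi += weights[j] * (diff > 0)          # usual criterion: P=1 if better
    phi_plus = pi.sum(axis=1) / (m - 1)        # how strongly a site outranks
    phi_minus = pi.sum(axis=0) / (m - 1)       # how strongly it is outranked
    return phi_plus - phi_minus                # net flow: higher = higher rank

# Toy example: 3 contaminated sites x 3 criteria (toxicity, exposure, receptors)
scores = np.array([[8.0, 0.3, 120.0],
                   [5.0, 0.7, 300.0],
                   [9.0, 0.2, 80.0]])
weights = np.array([0.5, 0.3, 0.2])
print(promethee_ii(scores, weights, prefer_max=np.array([True, True, True])))
```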
Decerns: A framework for multi-criteria decision analysis
Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; ...
2015-02-27
A new framework, Decerns, is introduced for multicriteria decision analysis (MCDA) of a wide range of practical risk-management problems. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on a multicriteria location problem.
An improved method for measuring the magnetic inhomogeneity shift in hydrogen masers
NASA Technical Reports Server (NTRS)
Reinhardt, V. S.; Peters, H. E.
1975-01-01
The reported method makes it possible to conduct all maser frequency measurements under conditions of low magnetic field intensity for which the hydrogen maser is most stable. Aspects concerning the origin of the magnetic inhomogeneity shift are examined and the available approaches for measuring this shift are considered, taking into account certain drawbacks of currently used methods. An approach free of these drawbacks can be based on the measurement of changes in a parameter representing the difference between the number of atoms in the involved states.
Harmony Search Method: Theory and Applications
Gao, X. Z.; Govindasamy, V.; Xu, H.; Wang, X.; Zenger, K.
2015-01-01
The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem. PMID:25945083
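A minimal Python sketch of the original HS loop described above follows; the parameter names (HMS, HMCR, PAR, bandwidth) follow the common HS literature, and the sphere function is only a toy objective.

```python
import numpy as np

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   n_iter=2000, seed=0):
    """Basic HS sketch: improvise a new harmony from memory (rate HMCR),
    pitch-adjust it (rate PAR, bandwidth bw), else draw at random;
    replace the worst harmony whenever the new one is better."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, dim))        # harmony memory
    fit = np.array([f(x) for x in hm])
    for _ in range(n_iter):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:               # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                    # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        fnew = f(new)
        if fnew < fit[worst]:
            hm[worst], fit[worst] = new, fnew
    best = np.argmin(fit)
    return hm[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))             # toy objective
print(harmony_search(sphere, np.array([[-5.0, 5.0]] * 3)))
```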
NASA Technical Reports Server (NTRS)
Adams, Gaynor J; Dugan, Duane W
1952-01-01
A method of analysis based on slender-wing theory is developed to investigate the characteristics in roll of slender cruciform wings and wing-body combinations. The method makes use of the conformal mapping processes of classical hydrodynamics, which relate the region outside a circle to the region outside an arbitrary arrangement of line segments intersecting at the origin. The method of analysis may be utilized to solve other slender cruciform wing-body problems involving arbitrarily assigned boundary conditions.
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution functions (ODFs), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method for subject-specific connectivity measurements performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
[Isolation and identification methods of enterobacteria group and its technological advancement].
Furuta, Itaru
2007-08-01
In the last half-century, isolation and identification methods for enterobacteria groups have improved markedly through technological advances. Clinical microbiology tests have changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates and the indole, methyl red, citrate and urease tests. Current methods, commercial identification kits and automated instruments with computer-based analysis, are also discussed; these methods provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodin, P; Guha, C; Tome, W
Purpose: To determine patterns of failure in laryngeal cancer treated with definitive IMRT by comparing two different methods for identifying the recurrence epicenter on follow-up PET/CT. Methods: We identified 20 patients treated for laryngeal squamous cell carcinoma with definitive IMRT who had loco-regional recurrence diagnosed on PET/CT. Recurrence PET/CT scans were co-registered with the original treatment planning CT using deformable image registration with the VoxAlign deformation engine in MIM Software. Recurrence volumes were delineated on co-registered follow-up scans using a semi-automatic PETedge tool, and two separate methods were used to identify the recurrence point of origin: a) finding the point within the recurrence volume for which the maximum distance to the surface of the surrounding recurrence volume is smaller than for any other point; b) finding the point within the recurrence volume with the maximum standardized uptake value (SUVmax), without geometric restrictions. For each method the failure pattern was determined as whether the recurrence origin fell within the original high-dose target volumes GTV70, CTV70, PTV70 (receiving 70 Gy), intermediate-risk PTV59 (receiving 59.4 Gy) or low-risk PTV54 (receiving 54.1 Gy) in the original treatment planning CT. Results: 23 primary/nodal recurrences from the 20 patients were analyzed. The three-dimensional distance between the two different origins was on average 10.5 mm (std. dev. 10 mm). Most recurrences originated in the high-dose target volumes for both methods, with 13 (57%) and 11 (48%) in the GTV70 and 20 (87%) and 20 (87%) in the PTV70 for methods a) and b), respectively. There was good agreement between the two methods in classifying the origin target volumes, with 69% concordance for GTV70, 89% for CTV70 and 100% for PTV70. Conclusion: With strong agreement in patterns of failure between two separate methods for determining recurrence origin, we conclude that most recurrences occurred within the high-dose treatment region, which influences potential risk-adaptive treatment strategies.
High-speed bioimaging with frequency-division-multiplexed fluorescence confocal microscopy
NASA Astrophysics Data System (ADS)
Mikami, Hideharu; Harmon, Jeffrey; Ozeki, Yasuyuki; Goda, Keisuke
2017-04-01
We present methods of fluorescence confocal microscopy that enable an unprecedentedly high frame rate of >10,000 fps. The methods are based on a frequency-division multiplexing technique, which was originally developed in the field of communication engineering. Specifically, we achieved a broad detection-signal bandwidth (approximately 400 MHz) using a dual-AOD method and overcame the frame-rate limitations imposed by the scanning device by using a multi-line focusing method, resulting in a significant increase in frame rate. The methods have potential biomedical applications such as observation of sub-millisecond dynamics in biological tissues, in-vivo three-dimensional imaging, and fluorescence imaging flow cytometry.
A Century of Enzyme Kinetic Analysis, 1913 to 2013
Johnson, Kenneth A.
2013-01-01
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
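As an illustration of the progress-curve approach described above, here is a minimal sketch (Python with NumPy/SciPy; the rate constants, initial concentration and synthetic data are hypothetical) that numerically integrates the Michaelis-Menten rate equation and fits Vmax and Km to a full time course:

    # Fit a full progress curve by integrating dS/dt = -Vmax*S/(Km+S).
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def progress_curve(t, Vmax, Km, S0=1.0):
        # Integrate the rate equation and return substrate concentration at times t.
        sol = solve_ivp(lambda _, S: -Vmax * S / (Km + S), (0, t[-1]), [S0],
                        t_eval=t, rtol=1e-8)
        return sol.y[0]

    t = np.linspace(0, 50, 200)
    S_obs = progress_curve(t, 0.05, 0.4) + np.random.normal(0, 0.005, t.size)  # synthetic data
    (Vmax_fit, Km_fit), _ = curve_fit(progress_curve, t, S_obs, p0=[0.1, 1.0])
    print(Vmax_fit, Km_fit)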
Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data from healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for a precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; for long signals, however, MSE works well. To overcome this drawback of the original MSE, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features. PMID:29771977
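For readers unfamiliar with the coarse-graining step that MSE and its variants share, a minimal sketch follows (Python/NumPy; it uses classic sample entropy rather than the NCSE estimator proposed in the paper, and the input series is synthetic):

    import numpy as np

    def coarse_grain(x, tau):
        # Average consecutive non-overlapping windows of length tau (MSE step 1).
        n = len(x) // tau
        return x[:n * tau].reshape(n, tau).mean(axis=1)

    def sample_entropy(x, m=2, r=None):
        # Classic SampEn: -ln(A/B), with A, B the counts of template matches of
        # length m+1 and m under a Chebyshev tolerance r.
        r = 0.15 * np.std(x) if r is None else r
        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - m)])
            d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(t)) / 2.0  # exclude self-matches
        B, A = matches(m), matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    x = np.random.randn(1000)  # stand-in for an interbeat interval series
    mse = [sample_entropy(coarse_grain(x, tau)) for tau in range(1, 11)]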
Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data from healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for a precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; for long signals, however, MSE works well. To overcome this drawback of the original MSE, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features.
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Hommet, Caroline; Vidal, Julie; Roux, Sylvie; Blanc, Romuald; Barthez, Marie Anne; De Becque, Brigitte; Barthelemy, Catherine; Bruneau, Nicole; Gomot, Marie
2009-01-01
Introduction: Developmental dyslexia (DD) is a frequent language-based learning disorder. The predominant etiological view postulates that reading problems originate from a phonological impairment. Method: We studied mismatch negativity (MMN) and Late Discriminative Negativity (LDN) to syllables change in both children (n = 12; 8-12 years) and…
Monitoring Social Media: Students Satisfaction with University Administration Activities
ERIC Educational Resources Information Center
Koshkin, Andrey Petrovich; Rassolov, Ilya Mihajlovich; Novikov, Andrey Vadimovich
2017-01-01
The paper presents an original method of identifying students' satisfaction with the activities of their university administration based on studying the content of comments on social networks. The analysis of student opinions revealed areas of concern and priority areas in the work of the university administration. The paper characterizes…
ERIC Educational Resources Information Center
Hopson, Laura M.; Steiker, Lori K. H.
2008-01-01
The purpose of this article is to set forth an innovative methodological protocol for culturally grounding interventions with high-risk youths in alternative schools. This study used mixed methods to evaluate original and adapted versions of a culturally grounded substance abuse prevention program. The qualitative and quantitative methods…
Strategic Leadership: A Model for Promoting, Sustaining, and Advancing Institutional Significance
ERIC Educational Resources Information Center
Scott, Kenneth E.; Johnson, Mimi
2011-01-01
This article presents the methods, materials, and manpower required to create a strategic leadership program for promoting, sustaining, and advancing institutional significance. The functionality of the program is based on the Original Case Study Design (OCSD) methodology, in which participants are given actual college issues to investigate from a…
The Sanctuary Model of Trauma-Informed Organizational Change
ERIC Educational Resources Information Center
Bloom, Sandra L.; Sreedhar, Sarah Yanosy
2008-01-01
This article features the Sanctuary Model[R], a trauma-informed method for creating or changing an organizational culture. Although the model is based on trauma theory, its tenets have application in working with children and adults across a wide diagnostic spectrum. Originally developed in a short-term, acute inpatient psychiatric setting for…
A Computational Model of Learners Achievement Emotions Using Control-Value Theory
ERIC Educational Resources Information Center
Muñoz, Karla; Noguez, Julieta; Neri, Luis; Mc Kevitt, Paul; Lunney, Tom
2016-01-01
Game-based Learning (GBL) environments make instruction flexible and interactive. Positive experiences depend on personalization. Student modelling has focused on affect. Three methods are used: (1) recognizing the physiological effects of emotion, (2) reasoning about emotion from its origin and (3) an approach combining 1 and 2. These have proven…
Automatic Text Analysis Based on Transition Phenomena of Word Occurrences
ERIC Educational Resources Information Center
Pao, Miranda Lee
1978-01-01
Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
Pilates and Mindfulness: A Qualitative Study
ERIC Educational Resources Information Center
Adams, Marianne; Caldwell, Karen; Atkins, Laurie; Quin, Rebecca
2012-01-01
The Pilates method, as Joseph Hubertus Pilates originally defined it in the early 1920s, was called "Contrology," or the art of control, based on the ideal of attaining a complete coordination of body, mind, and spirit. Joseph Pilates's early writings emphasized the value of controlling the body rather than attending to the process of body…
Using the "Rural Atelier" as an Educational Method in Landscape Studies
ERIC Educational Resources Information Center
Meijles, Erik; Van Hoven, Bettina
2010-01-01
Drawing on experiences from a project conducted in the "Drentsche Aa" area in the Netherlands, this article discusses the concept of the "rural atelier" as a form of problem-based learning. The rural atelier principle was used originally in rural development planning and described as such by Foorthuis (2005) and Elerie and…
ERIC Educational Resources Information Center
Burkhart, Jocelyn
2016-01-01
This paper briefly explores the gap in the environmental education literature on emotions, and then offers a rationale and potential directions for engaging the emotions more fully, through the arts. Using autoenthnographic and arts-based methods, and including original songs and invitational reflective questions to open spaces for further inquiry…
ERIC Educational Resources Information Center
Ivey, Allen E.; Daniels, Thomas
2016-01-01
Originating in 1966-68, Microcounseling was the first video-based listening program and its purposes and methods overlap with the communications field. This article presents history, research, and present applications in multiple fields nationally and internationally. Recent neuroscience and neurobiology research is presented with the important…
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al. (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
Derrien, M; Jardé, E; Gruau, G; Pourcher, A M; Gourmelon, M; Jadas-Hécart, A; Pierson Wickmann, A C
2012-09-01
Improving the microbiological quality of coastal and river waters relies on the development of reliable markers that are capable of determining sources of fecal pollution. Recently, a principal component analysis (PCA) method based on six stanol compounds (i.e. 5β-cholestan-3β-ol (coprostanol), 5β-cholestan-3α-ol (epicoprostanol), 24-methyl-5α-cholestan-3β-ol (campestanol), 24-ethyl-5α-cholestan-3β-ol (sitostanol), 24-ethyl-5β-cholestan-3β-ol (24-ethylcoprostanol) and 24-ethyl-5β-cholestan-3α-ol (24-ethylepicoprostanol)) was shown to be suitable for distinguishing between porcine and bovine feces. In this study, we tested whether this PCA method, using the above six stanols, could be used as a tool in "Microbial Source Tracking (MST)" methods in water from areas of intensive agriculture, where diffuse fecal contamination is often marked by the co-existence of human and animal sources. In particular, well-defined and stable clusters were found in PCA score plots clustering samples of "pure" human, bovine and porcine feces along with runoff and diluted waters in which the source of contamination is known. Good consistency was also observed between the source assignments made by the six-stanol-based PCA method and the microbial markers for river waters contaminated by fecal matter of unknown origin. More generally, the tests conducted in this study argue for the addition of the six-stanol-based PCA method to the MST toolbox to help identify fecal contamination sources. The data presented in this study show that this addition would improve the determination of fecal contamination sources when the contamination levels are low to moderate. Copyright © 2012 Elsevier Ltd. All rights reserved.
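A minimal sketch of the PCA step on six-stanol profiles might look as follows (Python with scikit-learn; the compound order follows the list above, but all concentration values are invented for illustration):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Rows: samples; columns: coprostanol, epicoprostanol, campestanol,
    # sitostanol, 24-ethylcoprostanol, 24-ethylepicoprostanol (relative %).
    X = np.array([[55, 5,  3,  7, 25,  5],   # human-like profile (hypothetical)
                  [10, 3,  8, 15, 50, 14],   # porcine-like profile (hypothetical)
                  [ 5, 2, 15, 40, 25, 13]])  # bovine-like profile (hypothetical)
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    print(scores)  # clusters in the PC1/PC2 score plot suggest the fecal source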
NASA Astrophysics Data System (ADS)
Yan, Yue
2018-03-01
A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on convolutional neural networks (CNN) trained with augmented training samples is proposed. To enhance the robustness of the CNN to various extended operating conditions (EOCs), the original training images are used to generate noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. The generated images, together with the original ones, are then used to train a designed CNN for target recognition. The augmented training samples correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., noise corruption, resolution variation, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
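A minimal sketch of the two augmentation operations that are simplest to reproduce, noise at a target SNR and partial occlusion, is given below (Python/NumPy; the patch size and SNR value are illustrative, not the paper's settings):

    import numpy as np

    def add_noise_at_snr(img, snr_db):
        # Scale white Gaussian noise so that 10*log10(Ps/Pn) equals snr_db.
        p_signal = np.mean(img.astype(float) ** 2)
        p_noise = p_signal / (10 ** (snr_db / 10))
        return img + np.random.normal(0, np.sqrt(p_noise), img.shape)

    def occlude(img, frac=0.25):
        # Zero out a random rectangular patch covering roughly `frac` of the chip.
        h, w = img.shape
        ph, pw = int(h * np.sqrt(frac)), int(w * np.sqrt(frac))
        y, x = np.random.randint(h - ph), np.random.randint(w - pw)
        out = img.astype(float).copy()
        out[y:y + ph, x:x + pw] = 0
        return out

    chip = np.random.rand(64, 64)  # stand-in for a SAR target chip
    augmented = [chip, add_noise_at_snr(chip, 5), occlude(chip)]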
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, carrying local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original method and obtains desirable results in segmenting infrared images.
A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.
2015-11-15
Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process of a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth-based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.
NASA Astrophysics Data System (ADS)
Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján
2016-07-01
This paper presents a new method of automatic lineament extraction which includes the removal of the 'artefacts effect' associated with raster-based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of the derived 'protolineaments'. The algorithm also includes classification into positive and negative lineaments. MHHC was tested in two different territories in the Bohemian Forest and the Central Western Carpathians. An original vector-based algorithm was developed to compare the proximity of individual lineaments. Its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.
Novel disturbance-observer-based control for systems with high-order mismatched disturbances
NASA Astrophysics Data System (ADS)
Fang, Xing; Liu, Fei; Wang, Zhiguo; Dong, Na
2018-01-01
A novel disturbance-observer-based control method is investigated to attenuate high-order mismatched disturbances. First, a finite-time disturbance observer (FTDO) is proposed to estimate the disturbances as well as their derivatives. By incorporating the outputs of the FTDO, the original system is then reconstructed, where the mismatched disturbances are transformed into matched ones that are compensated by a feed-forward algorithm. Moreover, a feedback control law is developed to achieve the stability and tracking performance requirements for the system. Finally, the proposed composite control method is applied to an unmanned helicopter system. The simulation results demonstrate that the proposed control method exhibits excellent control performance in the presence of high-order matched and mismatched disturbances.
Wear Detection of Drill Bit by Image-based Technique
NASA Astrophysics Data System (ADS)
Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul
2018-03-01
Image processing for computer vision plays an essential role in tool condition monitoring in the manufacturing industries. This study proposes a dependable direct method for measuring tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter the colour image and convert it to binary data. The edge detection method was then applied to characterize the edge of the drill bit. Using the cross-correlation method, the edge profiles of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to detect the worn edge despite only small differences between the graphs. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
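A minimal sketch of the thresholding/edge-profile/cross-correlation pipeline could look like this (Python/NumPy; the triangular test arrays are synthetic stand-ins for thresholded drill-bit images):

    import numpy as np

    def edge_profile(binary):
        # For each column, the row index of the first foreground pixel (the lip edge).
        return np.argmax(binary, axis=0).astype(float)

    def normalized_xcorr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return np.correlate(a, b, mode="full") / (np.linalg.norm(a) * np.linalg.norm(b))

    img_new = np.tri(100, 100, k=0)    # stand-in for a thresholded new bit
    img_worn = np.tri(100, 100, k=-4)  # stand-in for a worn bit (edge shifted)
    corr = normalized_xcorr(edge_profile(img_new), edge_profile(img_worn))
    print(corr.max())  # a lower peak suggests edge deviation, i.e. wear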
NASA Astrophysics Data System (ADS)
Krčmár, Roman; Šamaj, Ladislav
2018-01-01
The partition function of the symmetric (zero electric field) eight-vertex model on a square lattice can be formulated either in the original "electric" vertex format or in an equivalent "magnetic" Ising-spin format. In this paper, both electric and magnetic versions of the model are studied numerically by using the corner transfer matrix renormalization-group method which provides reliable data. The emphasis is put on the calculation of four specific critical exponents, related by two scaling relations, and of the central charge. The numerical method is first tested in the magnetic format, the obtained dependencies of critical exponents on the model's parameters agree with Baxter's exact solution, and weak universality is confirmed within the accuracy of the method due to the finite size of the system. In particular, the critical exponents η and δ are constant as required by weak universality. On the other hand, in the electric format, analytic formulas based on the scaling relations are derived for the critical exponents ηe and δe which agree with our numerical data. These exponents depend on the model's parameters which is evidence for the full nonuniversality of the symmetric eight-vertex model in the original electric formulation.
Overcoming the winner's curse: estimating penetrance parameters from case-control data.
Zollner, Sebastian; Pritchard, Jonathan K
2007-04-01
Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
Motion capture based identification of the human body inertial parameters.
Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko
2008-01-01
The body's inertias, masses and centers of mass are important data for simulating, monitoring and understanding the dynamics of motion and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows painless, in-vivo estimation and monitoring of the inertial parameters. The method is described, and the experimental results obtained are presented and discussed.
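Identification problems of this kind are linear in the inertial parameters, so the core computation reduces to a least-squares solve; a minimal sketch follows (Python/NumPy; the regressor and measurements are random stand-ins for quantities built from motion capture and contact force data):

    import numpy as np

    A = np.random.randn(600, 10)   # regressor from motion data (stand-in)
    phi_true = np.random.rand(10)  # mass, first moments, inertia entries
    F = A @ phi_true + 0.01 * np.random.randn(600)  # contact force measurements
    phi_est, *_ = np.linalg.lstsq(A, F, rcond=None)
    print(np.max(np.abs(phi_est - phi_true)))  # small residual error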
NASA Astrophysics Data System (ADS)
Yu, Bing; Shu, Wenjun; Cao, Can
2018-05-01
A novel modeling method for aircraft engines using nonlinear autoregressive exogenous (NARX) models based on wavelet neural networks is proposed. The identification principle and process based on wavelet neural networks are studied, and a NARX-based modeling scheme is proposed. Then, time series data sets from three types of aircraft engines are used to build the corresponding NARX models, and these NARX models are validated by simulation. The results show that all the best NARX models capture the original aircraft engine's dynamic characteristics well with high accuracy. For every type of engine, the relative identification errors between its best NARX model and the component-level model are no more than 3.5%, and most are within 1%.
Analysis of selected data from the triservice missile data base
NASA Technical Reports Server (NTRS)
Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.
1989-01-01
An extremely large, systematic, axisymmetric-body/tail-fin data base has been gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but these data are also valuable as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analyses of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Flow-visualization photographs are examined to provide physical insight into the cause of these effects.
Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S. Stanley
2016-01-01
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability. PMID:27213413
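A minimal sketch of the split-and-concatenate idea (Python with scikit-learn's NMF; the matrix size and rank are arbitrary):

    import numpy as np
    from sklearn.decomposition import NMF

    X = np.random.rand(50, 8)                  # hypothetical observations x variables
    Xc = X - X.mean(axis=0)                    # centering makes the matrix mixed-sign
    X_posneg = np.hstack([np.maximum(Xc, 0),   # positive part
                          np.maximum(-Xc, 0)]) # absolute value of the negative part
    W = NMF(n_components=3, init="nndsvda").fit_transform(X_posneg)
    clusters = W.argmax(axis=1)                # assign each observation to a cluster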
Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S Stanley
2016-05-18
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability.
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi
2018-02-01
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in Computed Tomography (CT) images. Because CT images are gray scale, a DCNN usually uses one channel for inputting image data. In contrast, this research uses a multi-channel DCNN in which each channel corresponds to the original raw image or to images transformed by preprocessing techniques. The information obtained from raw images alone is limited, and previous research has suggested that preprocessing of images contributes to improving classification accuracy; the combination of the original and preprocessed images is therefore expected to yield higher accuracy. The proposed method realizes region of interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. The experimental results show that the classification accuracy of the proposed method was better than that of the conventional single-channel method, with a significant difference between them.
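A minimal sketch of assembling such a multi-channel input (Python with SciPy; the particular filters chosen here, smoothing and edge magnitude, are illustrative assumptions, not necessarily the paper's preprocessing):

    import numpy as np
    from scipy import ndimage

    def to_multichannel(roi):
        # Channel 0: raw ROI; channel 1: smoothed; channel 2: edge response.
        smooth = ndimage.gaussian_filter(roi, sigma=1.0)
        edges = ndimage.sobel(roi.astype(float))
        chans = [roi, smooth, edges]
        return np.stack([(c - c.mean()) / (c.std() + 1e-8) for c in chans], axis=-1)

    roi = np.random.rand(32, 32)  # stand-in for a 32x32 CT ROI
    x = to_multichannel(roi)      # shape (32, 32, 3), ready for a CNN input layer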
Study on Spatio-Temporal Change of Ecological Land in Yellow River Delta Based on RS&GIS
NASA Astrophysics Data System (ADS)
An, GuoQiang
2018-06-01
The temporal and spatial variation of ecological land use and its current distribution were studied to provide a reference for the protection of original ecological land and the ecological environment in the Yellow River Delta. RS colour synthesis, supervised classification, unsupervised classification, vegetation indices and other methods were used to monitor the impact of human activities on the original ecological land over the past 30 years; GIS technology was used to analyse the statistical data and to construct an original ecological land area index model for studying the distribution of ecological land. The results show that the boundary of original ecological land in the Yellow River Delta has been pushed toward the coastline at an average speed of 0.8 km per year by human activities. In the past 20 years, a large amount of original ecological land has gradually transformed into artificial ecological land. In view of the evolution and status of ecological land in the Yellow River Delta, the relevant local departments should adopt differentiated and focused protection measures to protect the ecological land of the Yellow River Delta.
Method of making metal matrix composites reinforced with ceramic particulates
Cornie, James A.; Kattamis, Theodoulos; Chambers, Brent V.; Bond, Bruce E.; Varela, Raul H.
1989-01-01
Composite materials and methods for making such materials are disclosed in which dispersed ceramic particles are at chemical equilibrium with a base metal matrix, thereby permitting such materials to be remelted and subsequently cast or otherwise processed to form net weight parts and other finished (or semi-finished) articles while maintaining the microstructure and mechanical properties (e.g. wear resistance or hardness) of the original composite. The composite materials of the present invention are composed of ceramic particles in a base metal matrix. The ceramics are preferably carbides of titanium, zirconium, tungsten, molybdenum or other refractory metals. The base metal can be iron, nickel, cobalt, chromium or other high temperature metal and alloys thereof. For ferrous matrices, alloys suitable for use as the base metal include cast iron, carbon steels, stainless steels and iron-based superalloys.
Method of making metal matrix composites reinforced with ceramic particulates
Cornie, J.A.; Kattamis, T.; Chambers, B.V.; Bond, B.E.; Varela, R.H.
1989-08-01
Composite materials and methods for making such materials are disclosed in which dispersed ceramic particles are at chemical equilibrium with a base metal matrix, thereby permitting such materials to be remelted and subsequently cast or otherwise processed to form net weight parts and other finished (or semi-finished) articles while maintaining the microstructure and mechanical properties (e.g. wear resistance or hardness) of the original composite. The composite materials of the present invention are composed of ceramic particles in a base metal matrix. The ceramics are preferably carbides of titanium, zirconium, tungsten, molybdenum or other refractory metals. The base metal can be iron, nickel, cobalt, chromium or other high temperature metal and alloys thereof. For ferrous matrices, alloys suitable for use as the base metal include cast iron, carbon steels, stainless steels and iron-based superalloys. 2 figs.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
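A minimal sketch of a generalized p-shrinkage mapping in Chartrand's commonly used form (Python/NumPy; this is an illustrative operator and not necessarily the exact mapping used in the paper):

    import numpy as np

    def p_shrink(x, lam, p):
        # Generalized p-shrinkage: reduces to soft thresholding at p = 1.
        absx = np.abs(x)
        thresh = lam ** (2 - p) * np.where(absx > 0, absx, 1.0) ** (p - 1)
        return np.sign(x) * np.maximum(absx - thresh, 0)

    x = np.linspace(-2, 2, 9)
    print(p_shrink(x, 0.5, 1.0))  # p = 1: classic soft thresholding
    print(p_shrink(x, 0.5, 0.5))  # p < 1: small coefficients shrink harder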
Fast Fragmentation of Networks Using Module-Based Attacks
Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián
2015-01-01
In the multidisciplinary field of Network Science, the optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities through which the network can be represented, using a well-established heuristic algorithm of community finding. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610
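A minimal sketch of the module-based attack (Python with NetworkX; the test graph, the particular community algorithm and the 3% budget are illustrative choices):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.connected_watts_strogatz_graph(200, 6, 0.05, seed=1)  # stand-in network
    communities = greedy_modularity_communities(G)
    label = {n: i for i, c in enumerate(communities) for n in c}
    # Nodes that participate in inter-community (bridge) links.
    bridges = {n for u, v in G.edges if label[u] != label[v] for n in (u, v)}
    bc = nx.betweenness_centrality(G)
    for n in sorted(bridges, key=bc.get, reverse=True)[:int(0.03 * len(G))]:
        G.remove_node(n)  # remove 3% of nodes, in descending betweenness order
    largest = max(nx.connected_components(G), key=len)
    print(len(largest) / 200)  # giant component size relative to the original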
Finch, Kristen; Espinoza, Edgard; Jones, F. Andrew; Cronn, Richard
2017-01-01
Premise of the study: We investigated whether wood metabolite profiles from direct analysis in real time (time-of-flight) mass spectrometry (DART-TOFMS) could be used to determine the geographic origin of Douglas-fir wood cores originating from two regions in western Oregon, USA. Methods: Three annual ring mass spectra were obtained from 188 adult Douglas-fir trees, and these were analyzed using random forest models to determine whether samples could be classified to geographic origin, growth year, or growth year and geographic origin. Specific wood molecules that contributed to geographic discrimination were identified. Results: Douglas-fir mass spectra could be differentiated into two geographic classes with an accuracy between 70% and 76%. Classification models could not accurately classify sample mass spectra based on growth year. Thirty-two molecules were identified as key for classifying western Oregon Douglas-fir wood cores to geographic origin. Discussion: DART-TOFMS is capable of detecting minute but regionally informative differences in wood molecules over a small geographic scale, and these differences made it possible to predict the geographic origin of Douglas-fir wood with moderate accuracy. Studies involving DART-TOFMS, alone and in combination with other technologies, will be relevant for identifying the geographic origin of illegally harvested wood. PMID:28529831
Design of tree structured matched wavelet for HRV signals of menstrual cycle.
Rawal, Kirti; Saini, B S; Saini, Indu
2016-07-01
An algorithm is presented for designing a new class of wavelets matched to the Heart Rate Variability (HRV) signals of the menstrual cycle. The proposed wavelets are used to find HRV variations between phases of the menstrual cycle. The method finds the signal-matching characteristics by minimising the shape feature error using the least mean square method. The proposed filter banks are used for the decomposition of the HRV signal. For reconstructing the original signal, a tree-structure method is used, in which the decomposed sub-bands are selected based on their energy. Thus, instead of using all sub-bands for reconstruction, only sub-bands with high energy content are used, so a smaller number of sub-bands is required to reconstruct the original signal, which shows the effectiveness of the newly created filter coefficients. Results show that the proposed wavelets are able to differentiate HRV variations between phases of the menstrual cycle more accurately than standard wavelets.
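A minimal sketch of energy-based sub-band selection (Python with PyWavelets; a standard db4 wavelet stands in for the matched wavelet, and the signal is synthetic):

    import numpy as np
    import pywt

    x = np.random.randn(1024)                 # stand-in for an HRV series
    coeffs = pywt.wavedec(x, "db4", level=5)  # decomposition into sub-bands
    energies = [np.sum(c ** 2) for c in coeffs]
    keep = np.argsort(energies)[-3:]          # keep the 3 highest-energy bands
    pruned = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    x_rec = pywt.waverec(pruned, "db4")       # reconstruct from selected bands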
NASA Technical Reports Server (NTRS)
Klein, Vladislav
2002-01-01
The program objectives were defined in the original proposal entitled 'Program of Research in Flight Dynamics in the JIAFS at NASA Langley Research Center', which originated on March 20, 1975, and in yearly renewals of the research program dated December 1, 1998 to December 31, 2002. The program included three major topics: 1) improvement of existing methods and development of new methods for flight and wind tunnel data analysis based on system identification methodology; 2) application of these methods to flight and wind tunnel data obtained from advanced aircraft; 3) modeling and control of aircraft. The principal investigator of the program was Dr. Vladislav Klein, Professor Emeritus at The George Washington University, DC. Seven Graduate Research Scholar Assistants (GRSA) participated in the program. The results of the research conducted during the four years of the co-operative period were published in 2 NASA Technical Reports, 3 theses and 3 papers. The list of these publications is included.
Study and application of acoustic emission testing in fault diagnosis of low-speed heavy-duty gears.
Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei
2011-01-01
Most present studies on the acoustic emission signals of rotating machinery are experiment-oriented, while few of them involve on-spot applications. In this study, a method of redundant second generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision is not needed during the decomposition. The lengths of the approximation and detail signals are the same as those of the original signals, so the data volume is twice that of the original signals; this data redundancy also ensures the good analysis performance of the method. The analysis of acoustic emission data from faults of on-spot low-speed heavy-duty gears validated the redundant second generation wavelet transform for the processing and denoising of acoustic emission signals. Furthermore, the analysis illustrated that acoustic emission testing can be used in the fault diagnosis of on-spot low-speed heavy-duty gears and can be a significant supplement to vibration testing diagnosis.
He, Xin; Wang, Geng Nan; Yang, Kun; Liu, Hui Zhi; Wu, Xia Jun; Wang, Jian Ping
2017-04-15
In this study, a magnetic graphene-based dispersive solid phase extraction method was developed and combined with high performance liquid chromatography to determine the residues of fluoroquinolone drugs in foods of animal origin. During the experiments, several parameters possibly influencing the extraction performance were optimized (amount of magnetic graphene, sample pH, extraction time and elution solution). This extraction method showed high adsorption capacities (>6800 ng) and high enrichment factors (68-79-fold) for seven fluoroquinolones. Furthermore, the adsorbent could be reused at least 40 times. The limits of detection were in the range of 0.05-0.3 ng/g, and the recoveries from standard-fortified blank samples (bovine milk, chicken muscle and egg) were in the range of 82.4-108.5%. Therefore, this method could be used as a simple and sensitive tool to determine the residues of fluoroquinolones in foods of animal origin. Copyright © 2016 Elsevier Ltd. All rights reserved.
SHOP: a method for structure-based fragment and scaffold hopping.
Fontaine, Fabien; Cross, Simon; Plasencia, Guillem; Pastor, Manuel; Zamora, Ismael
2009-03-01
A new method for fragment and scaffold replacement is presented that generates new families of compounds with biological activity, using GRID molecular interaction fields (MIFs) and the crystal structure of the targets. In contrast to virtual screening strategies, this methodology aims only to replace a fragment of the original molecule, maintaining the other structural elements that are known or suspected to have a critical role in ligand binding. First, we report a validation of the method, recovering up to 95% of the original fragments searched among the top-five proposed solutions, using 164 fragment queries from 11 diverse targets. Second, six key customizable parameters are investigated, concluding that filtering the receptor MIF using the co-crystallized ligand atom type has the greatest impact on the ranking of the proposed solutions. Finally, 11 examples using more realistic scenarios have been performed; diverse chemotypes are returned, including some that are similar to compounds that are known to bind to similar targets.
NASA Astrophysics Data System (ADS)
Jiang, Zhuo; Xie, Chengjun
2013-12-01
This paper improves an algorithm for reversible integer linear transforms on the finite interval [0, 255], enabling a reversible integer linear transform over the whole number axis while shielding the data LSB (least significant bit). First, the method applies a lifting-scheme-based integer wavelet transform to the original image and selects the transformed high-frequency areas as the information hiding region; the high-frequency coefficient blocks are then transformed in an integer linear way, and the secret information is embedded in the LSB of each coefficient. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be recovered losslessly. Simulation results show that this method provides good secrecy and concealment after the CDF(m,n) and DD(m,n) series of wavelet transforms. The method can be applied to information security domains such as medicine, law and the military.
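A minimal sketch of the LSB embedding and extraction step (Python/NumPy; the integer wavelet transform is omitted and plain integers stand in for high-frequency coefficients):

    import numpy as np

    def embed_lsb(coeffs, bits):
        out = coeffs.copy()
        out[:len(bits)] = (out[:len(bits)] & ~1) | bits  # overwrite the LSBs
        return out

    def extract_lsb(coeffs, n):
        return coeffs[:n] & 1

    coeffs = np.array([14, -7, 22, 3, -18, 9])  # stand-in high-frequency coefficients
    bits = np.array([1, 0, 1, 1])
    stego = embed_lsb(coeffs, bits)
    assert (extract_lsb(stego, 4) == bits).all()  # lossless bit recovery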
Study and Application of Acoustic Emission Testing in Fault Diagnosis of Low-Speed Heavy-Duty Gears
Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei
2011-01-01
Most present studies on the acoustic emission signals of rotating machinery are experiment-oriented, while few of them involve on-spot applications. In this study, a method of redundant second generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision is not needed during the decomposition. The lengths of the approximation and detail signals are the same as those of the original signals, so the data volume is twice that of the original signals; this data redundancy also ensures the good analysis performance of the method. The analysis of acoustic emission data from faults of on-spot low-speed heavy-duty gears validated the redundant second generation wavelet transform for the processing and denoising of acoustic emission signals. Furthermore, the analysis illustrated that acoustic emission testing can be used in the fault diagnosis of on-spot low-speed heavy-duty gears and can be a significant supplement to vibration testing diagnosis. PMID:22346592
Real-time digital signal recovery for a multi-pole low-pass transfer function system.
Lee, Jhinhwan
2017-08-01
In order to solve the problems of waveform distortion and signal delay introduced by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method for real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis of the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied to the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. The method is generalized for multi-pole low-pass systems and has noise characteristics that are the inverse of the low-pass filter characteristics. It can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
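For the single-pole case, the recovery reduces to inverting the first-order relation tau*y' + y = x sample by sample; a minimal sketch follows (Python/NumPy; the time constant, sampling step and square-wave source are illustrative):

    import numpy as np

    def recover(y, tau, dt):
        # x[n] ~= y[n] + tau * (y[n] - y[n-1]) / dt, a two-tap moving operation.
        x = np.empty_like(y)
        x[0] = y[0]
        x[1:] = y[1:] + tau * np.diff(y) / dt
        return x

    dt, tau = 1e-4, 5e-3
    t = np.arange(0, 0.1, dt)
    x_true = np.sign(np.sin(2 * np.pi * 50 * t))  # square-wave source
    y = np.zeros_like(x_true)                     # simulate the low-pass output
    for n in range(1, len(t)):
        y[n] = y[n - 1] + dt / tau * (x_true[n] - y[n - 1])
    x_rec = recover(y, tau, dt)  # close to x_true; note the noise amplification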
Beretta, Giangiacomo; Caneva, Enrico; Regazzoni, Luca; Bakhtyari, Nazanin Golbamaki; Maffei Facino, Roberto
2008-07-14
The aim of this work was to establish an analytical method for identifying the botanical origin of honey, as an alternative to conventional melissopalynological, organoleptic and instrumental methods (gas chromatography coupled to mass spectrometry (GC-MS), high-performance liquid chromatography (HPLC)). The procedure is based on the (1)H nuclear magnetic resonance (NMR) profile coupled, when necessary, with electrospray ionisation-mass spectrometry (ESI-MS) and two-dimensional NMR analyses of solid-phase extraction (SPE)-purified honey samples, followed by chemometric analyses. Extracts of 44 commercial Italian honeys from 20 different botanical sources were analyzed. Honeydew, chestnut and linden honeys showed constant, specific, well-resolved resonances suitable for use as markers of origin. Honeydew honey contained the typical resonances of an aliphatic component, very likely deriving from the plant phloem sap or excreted into it by sap-sucking aphids. Chestnut honey contained the typical signals of kynurenic acid and some structurally related metabolites. In linden honey the (1)H NMR profile gave strong signals attributable to the monoterpene derivative cyclohexa-1,3-diene-1-carboxylic acid (CDCA) and to its 1-O-beta-gentiobiosyl ester (CDCA-GBE). These markers were not detectable in the other honeys, except for the less common nectar honey from rosa mosqueta. We compared and analyzed the data by multivariate techniques. Principal component analysis found different clusters of honeys based on the presence of these specific markers. The results, although preliminary, suggest that the (1)H NMR profile (with HPLC-MS analysis when necessary) can be used as a reference framework for identifying the botanical origin of honey.
Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad
2018-03-24
The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) compared with the first-order case. Hence, the original expression for E from the two heating rates method can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using an analytical expression instead of the graphical representation between the geometrical factor and the MO parameter given by the existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated for the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
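For reference, the first-order two-heating-rates relation that the method builds on (Booth's classical formula, with glow-peak temperatures T_m1 and T_m2 measured at heating rates beta_1 and beta_2, and k the Boltzmann constant; the paper's additional mixed-order term is not reproduced here):

    E = k \,\frac{T_{m1} T_{m2}}{T_{m1} - T_{m2}}
        \ln\!\left[ \frac{\beta_1}{\beta_2}
        \left( \frac{T_{m2}}{T_{m1}} \right)^{2} \right]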
The local lymph node assay (LLNA).
Rovida, Costanza; Ryan, Cindy; Cinelli, Serena; Basketter, David; Dearman, Rebecca; Kimber, Ian
2012-02-01
The murine local lymph node assay (LLNA) is a widely accepted method for assessing the skin sensitization potential of chemicals. Compared with other in vivo methods in guinea pig, the LLNA offers important advantages with respect to animal welfare, including a requirement for reduced animal numbers as well as reduced pain and trauma. In addition to hazard identification, the LLNA is used for determining the relative skin sensitizing potency of contact allergens as a pivotal contribution to the risk assessment process. The LLNA is the only in vivo method that has been subjected to a formal validation process. The original LLNA protocol is based on measurement of the proliferative activity of draining lymph node cells (LNC), as determined by incorporation of radiolabeled thymidine. Several variants to the original LLNA have been developed to eliminate the use of radioactive materials. One such alternative is considered here: the LLNA:BrdU-ELISA method, which uses 5-bromo-2-deoxyuridine (BrdU) in place of radiolabeled thymidine to measure LNC proliferation in draining nodes. © 2012 by John Wiley & Sons, Inc.
De Micco, Veronica; Ruel, Katia; Joseleau, Jean-Paul; Aronne, Giovanna
2010-08-01
During cell wall formation and degradation, cellulose microfibrils can be observed assembling into thicker and disassembling into thinner lamellar structures, respectively, following inverse but parallel patterns. The aim of this study was to analyse such patterns of microfibril aggregation and cell wall delamination. The thickness of microfibrils and lamellae was measured on digital images of both growing and degrading cell walls viewed by transmission electron microscopy. To objectively detect, measure and classify microfibrils and lamellae into thickness classes, a method based on computerized image analysis combined with graphical and statistical methods was developed. The method allowed common classes of microfibrils and lamellae to be identified in cell walls from different origins. During both the formation and degradation of cell walls, preferential formation of structures with specific thicknesses was evidenced. The developed method allowed objective analysis of patterns of microfibril aggregation and evidenced a trend of doubling/halving of lamellar structures during cell wall formation/degradation in materials of different origins that had undergone different treatments.
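A minimal sketch of the classification step is given below, assuming thickness measurements have already been extracted from the TEM images. The crude one-dimensional k-means grouping and the sample values are illustrative assumptions, not the paper's exact graphical/statistical procedure.

```python
# Minimal sketch: group thickness measurements (nm) into discrete
# classes with a crude 1D k-means. Values are hypothetical.
import numpy as np

def thickness_classes(measurements, n_classes=4, n_iter=50):
    """Return sorted class centres (nm) for 1D thickness data."""
    data = np.asarray(measurements, dtype=float)
    centres = np.quantile(data, np.linspace(0.1, 0.9, n_classes))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(data[:, None] - centres[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centres[k] = data[labels == k].mean()
    return np.sort(centres)

# Hypothetical thicknesses; centres fall near a doubling series
# (~3, 6, 12, 24 nm), echoing the doubling/halving trend in the text.
sample = [3.1, 2.9, 6.2, 5.8, 12.4, 11.9, 24.5, 23.8, 3.3, 6.0]
print(thickness_classes(sample))
```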
Essentially nonoscillatory postprocessing filtering methods
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1992-01-01
High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods, denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' equation and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained with our filters for nonstandard central difference schemes, which exactly preserve entropy but have recently been shown, in general, not to be weakly convergent to a solution of the conservation law.
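The sketch below illustrates the core idea in one dimension under stated assumptions: each point's stencil is chosen by an ENO-style smoothness test on differences, and the value is replaced by a least-squares polynomial fit on that stencil. The stencil width, polynomial degree, and test data are illustrative choices and do not reproduce the paper's exact ENOLS construction.

```python
# Minimal 1D sketch of an ENO-style least-squares filter: pick the
# smoothest stencil containing each point (ENO idea), then refit the
# value with a least-squares polynomial on that stencil.
import numpy as np

def eno_ls_filter(u, width=5, degree=2):
    n = len(u)
    out = u.copy()
    for i in range(n):
        # Candidate stencils of fixed width containing point i.
        starts = [s for s in range(i - width + 1, i + 1)
                  if s >= 0 and s + width <= n]
        # ENO selection: smallest second-difference magnitude.
        def roughness(s):
            return np.abs(np.diff(u[s:s + width], 2)).max()
        s = min(starts, key=roughness)
        x = np.arange(s, s + width, dtype=float)
        coeffs = np.polyfit(x, u[s:s + width], degree)  # least squares
        out[i] = np.polyval(coeffs, float(i))
    return out

# Oscillatory data near a jump: the filter damps spurious wiggles,
# while the adaptive stencil avoids smearing across the discontinuity.
x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.5, 1.0, 0.0) + 0.05 * np.sin(40 * np.pi * x)
print(np.abs(eno_ls_filter(u) - np.where(x < 0.5, 1.0, 0.0)).max())
```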
Solar cells based on InP/GaP/Si structure
NASA Astrophysics Data System (ADS)
Kvitsiani, O.; Laperashvil, D.; Laperashvili, T.; Mikelashvili, V.
2016-10-01
Solar cells (SCs) based on III-V semiconductors are reviewed. The presented work focuses on solar cells containing quantum dots (QDs) for next-generation photovoltaics. The fabrication of InP QDs on III-V semiconductors is investigated. An original method for the electrochemical deposition of the metals indium (In) and gallium (Ga) and of InGa alloys on the surface of gallium phosphide (GaP) is presented, together with the mechanism of InP QD formation on the GaP surface. The possibility of applying the InP/GaP/Si structure as a SC is discussed, and the arising challenges are also considered.
A synthetic method of solar spectrum based on LED
NASA Astrophysics Data System (ADS)
Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian
2017-10-01
A method for synthesising the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected in Origin software; the total number of LEDs for each central band was then calculated from the transformation relation between brightness and illuminance, using least-squares curve fitting in Matlab. Finally, the spectral curve of the AM1.5 standard solar spectrum was obtained. The results met the technical requirements of a solar spectral match within ±20% and a solar constant greater than 0.5.
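As an illustration of the least-squares synthesis step, the sketch below fits non-negative weights for a set of LED emission curves so that their weighted sum approximates a target spectrum. The Gaussian LED profiles, the 14 centre wavelengths, the bandwidth, and the stand-in target are all assumptions for illustration, not the paper's data or its Matlab routine.

```python
# Minimal sketch: approximate a target spectrum as a non-negative
# least-squares combination of LED emission curves. All spectra here
# are hypothetical stand-ins (Gaussian LEDs, smooth target).
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400.0, 1100.0, 701)      # wavelength grid (nm)
centres = np.linspace(420.0, 1060.0, 14)  # 14 LED centre wavelengths (nm)
sigma = 30.0 / 2.355                      # 30 nm FWHM -> Gaussian sigma

# Each column is one LED's (unit-peak) emission curve.
A = np.exp(-0.5 * ((wl[:, None] - centres[None, :]) / sigma) ** 2)

# Stand-in for the AM1.5 target spectrum (not the real table).
target = np.exp(-0.5 * ((wl - 550.0) / 180.0) ** 2)

weights, residual = nnls(A, target)       # least squares, weights >= 0
mismatch = (A @ weights - target) / target.max()
print("worst relative mismatch: %.1f%%" % (100 * np.abs(mismatch).max()))
```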
Govindarajulu, Rajanikanth; Hughes, Colin E; Alexander, Patrick J; Bailey, C Donovan
2011-12-01
The evolutionary history of Leucaena has been shaped by polyploidy, hybridization, and divergent allopatric species diversification, suggesting that this is an ideal group in which to investigate the evolutionary tempo of polyploidy and the complexities of reticulation and divergence in plant diversification. Parsimony- and ML-based phylogenetic approaches were applied to 105 accessions sequenced for six sequence-characterized amplified region-based nuclear-encoded loci, nrDNA ITS, and four cpDNA regions. Hypotheses for the origin of the tetraploid species were inferred using results derived from novel species tree and established gene tree methods and from data on genome sizes and geographic distributions. The combination of comprehensively sampled multilocus DNA sequence data sets and a novel methodology provides strong resolution and support for the origins of all five tetraploid species. A minimum of four allopolyploidization events is required to explain the origins of these species. The origin(s) of one tetraploid pair (L. involucrata/L. pallida) can be equally well explained by two independent allopolyploidizations or by a single event followed by divergent speciation. Alongside other recent findings, a comprehensive picture of the complex evolutionary dynamics of polyploidy in Leucaena is emerging that includes paleotetraploidization, diploidization of the last common ancestor of Leucaena, allopatric divergence among diploids, and recent allopolyploid origins for the tetraploid species, likely associated with human translocation of seed. These results provide insights into the role of divergence and reticulation in a well-characterized angiosperm lineage and into the traits of diploid parents and derived tetraploids (particularly self-compatibility and year-round flowering) that favor the formation and establishment of novel tetraploid combinations.