Motoki, Yoko; Miyagi, Etsuko; Taguri, Masataka; Asai-Sato, Mikiko; Enomoto, Takayuki; Wark, John Dennis; Garland, Suzanne Marie
2017-03-10
Prior research about the sexual and reproductive health of young women has relied mostly on self-reported survey studies, so participant recruitment using Web-based methods could improve sexual and reproductive health research on cervical cancer prevention. In our prior study, we reported that Facebook is a promising way to reach young women for sexual and reproductive health research. However, it remains unknown whether Web-based and conventional recruitment methods (ie, face-to-face or flyer distribution) yield comparable survey responses from similar participants. We conducted a survey to determine whether the sexual and reproductive health survey responses of young Japanese women differed between two recruitment methods: social media-based and conventional. From July 2012 to March 2013 (9 months), we invited women aged 16-35 years in Kanagawa, Japan, to complete a Web-based questionnaire. They were recruited either through a social networking site (SNS group) or by conventional methods (conventional group). All enrolled participants were required to fill out and submit a Web-based questionnaire about their sexual and reproductive health for cervical cancer prevention. Of the 243 participants, 52.3% (127/243) were recruited by SNS, whereas 47.7% (116/243) were recruited by conventional methods. We found no differences between recruitment methods in responses to the survey items on sexual and reproductive health behaviors and attitudes, although more participants in the conventional group (15%, 14/95) than in the SNS group (5.2%, 6/116) declined to report their age at first intercourse (P=.03). No differences were found between recruitment methods in the responses of young Japanese women to a Web-based sexual and reproductive health survey.
Web-Based Versus Conventional Training for Medical Students on Infant Gross Motor Screening.
Pusponegoro, Hardiono D; Soebadi, Amanda; Surya, Raymond
2015-12-01
Early detection of developmental abnormalities is important for early intervention. A simple screening method is needed for use by general practitioners, as is an effective and efficient training method. This study aims to evaluate the effectiveness, acceptability, and usability of Web-based training for medical students on a simple gross motor screening method in infants. Fifth-year medical students at University of Indonesia in Jakarta were randomized into two groups. A Web-based training group received online video modules, discussions, and assessments (at www.schoology.com). A conventional training group received a 1-day live training using the same module. Both groups completed identical pre- and posttests and the User Satisfaction Questionnaire (USQ). The Web-based group also completed the System Usability Scale (SUS). The module was based on a gross motor screening method used in the World Health Organization Multicentre Growth Reference Study. There were 39 and 32 subjects in the Web-based and conventional groups, respectively. Mean pretest versus posttest scores (correct answers out of 20) were 9.05 versus 16.95 (p=0.0001) in the Web-based group and 9.31 versus 16.88 (p=0.0001) in the conventional group. The mean difference between pre- and posttest scores did not differ significantly between the Web-based and conventional groups (mean [standard deviation], 7.56 [3.252] versus 7.90 [5.170]; p=0.741). Both training methods were acceptable based on USQ scores. Based on SUS scores, the Web-based training had good usability. Web-based training is an effective, efficient, and acceptable training method for medical students on simple infant gross motor screening and is as effective as conventional training.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids the double-loop iteration algorithm, which generally has large computational complexity, and (2) it accommodates the local concentration of nonlinear deformation observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Several numerical tests, including weak scaling tests, were performed with the conventional and proposed domain decomposition methods. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
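As a minimal illustration of why a quasi-Newton scheme avoids the double loop, the sketch below solves a small nonlinear system F(u) = 0 with Broyden's method: the Jacobian is assembled and inverted once, and thereafter only rank-one updates are applied. This is a generic toy in Python, not the paper's balancing-domain-decomposition solver; the test system and tolerances are assumptions.

```python
import numpy as np

def fd_jacobian(F, u, eps=1e-7):
    """Finite-difference Jacobian, assembled (and factorized) only once."""
    f0, n = F(u), u.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (F(u + e) - f0) / eps
    return J

def broyden(F, u0, tol=1e-10, max_iter=100):
    """Quasi-Newton (Broyden) iteration: no Jacobian re-factorization loop."""
    u = u0.astype(float)
    J_inv = np.linalg.inv(fd_jacobian(F, u))   # one-time factorization
    f = F(u)
    for _ in range(max_iter):
        du = -J_inv @ f
        u, f_new = u + du, F(u + du)
        if np.linalg.norm(f_new) < tol:
            break
        df = f_new - f
        # Sherman-Morrison rank-one update of the inverse Jacobian
        J_inv += np.outer(du - J_inv @ df, du @ J_inv) / (du @ J_inv @ df)
        f = f_new
    return u

# Small nonlinear toy system standing in for a discretized residual
F = lambda u: np.array([2*u[0] + u[1]**3 - 1.0, u[0]**2 + 3*u[1] - 2.0])
print(broyden(F, np.zeros(2)))
```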
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
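The core of the extrapolation idea can be sketched in a few lines: fit a signal model to the low-b volumes and predict synthetic reference volumes at high b, so registration targets have matching contrast. The sketch below uses a per-voxel log-linear (ADC) model on synthetic data; the model choice, b-values, and noise level are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
b_low = np.array([0.0, 250.0, 500.0])                # acquired low b-values, s/mm^2
b_high = 2750.0                                      # target high b-value
S0, D = 1000.0, 1.0e-3                               # true signal and diffusivity

# Synthetic low-b signals for 100 voxels, with mild multiplicative noise
voxels_low = S0 * np.exp(-np.outer(b_low, np.full(100, D)))
voxels_low *= 1 + 0.02 * rng.standard_normal(voxels_low.shape)

# Per-voxel least squares on the log-signal: ln S = ln S0 - b D
A = np.stack([np.ones_like(b_low), -b_low], axis=1)
coef, *_ = np.linalg.lstsq(A, np.log(voxels_low), rcond=None)
lnS0_hat, D_hat = coef

# Extrapolated reference volume at the high b-value
reference_high_b = np.exp(lnS0_hat - b_high * D_hat)
print(reference_high_b[:5], "true:", S0 * np.exp(-b_high * D))
```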
Video-Based Fingerprint Verification
Qin, Wei; Yin, Yilong; Liu, Lili
2013-01-01
Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and alignment, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both the dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single-impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used; our method still outperforms the conventional method even when both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can achieve better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation feature of the palmprint. However, in real operation, the orientations of the filters are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method, together with an effective weighted balance scheme to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method describes the orientation feature of the palmprint precisely and robustly. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
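For context, the conventional competitive-coding baseline that region-orientation methods build on can be sketched as follows: filter the image with a small bank of oriented line detectors, keep the winning orientation per pixel, and compare two codes by a normalized angular distance. The filter shapes, sizes, and random test images below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def oriented_filters(size=17, sigma_along=4.0, sigma_across=1.5, n_orient=6):
    """Bank of oriented second-derivative-of-Gaussian line detectors."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for t in np.arange(n_orient) * np.pi / n_orient:
        u = x * np.cos(t) + y * np.sin(t)          # along the line
        v = -x * np.sin(t) + y * np.cos(t)         # across the line
        g = (v**2 / sigma_across**2 - 1) * np.exp(
            -u**2 / (2 * sigma_along**2) - v**2 / (2 * sigma_across**2))
        bank.append(g - g.mean())
    return bank

def competitive_code(img, bank):
    """Winner-take-all orientation index per pixel (dark palm lines -> argmin)."""
    responses = np.stack([fftconvolve(img, f, mode="same") for f in bank])
    return np.argmin(responses, axis=0)

def angular_distance(a, b, n_orient=6):
    d = np.abs(a - b)
    return np.mean(np.minimum(d, n_orient - d)) / (n_orient // 2)

rng = np.random.default_rng(0)
palm_a = rng.random((128, 128))
palm_b = palm_a + 0.2 * rng.random((128, 128))      # noisy second impression
bank = oriented_filters()
print(angular_distance(competitive_code(palm_a, bank), competitive_code(palm_b, bank)))
```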
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Methods for Equating Mental Tests.
1984-11-01
1983) compared conventional and IRT methods for equating the Test of English as a Foreign Language (TOEFL) after chaining. Three conventional and ... three IRT equating methods were examined in this study; two sections of TOEFL were each (separately) equated. The IRT methods included the following: (a) ... group. A separate base form was established for each of the six equating methods. Instead of equating the base-form TOEFL to itself, the last (eighth
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always reflect the difference between the test sample and each class well. In this paper, we propose a novel representation-based classification method for face recognition that integrates conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., it uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., it provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, it shows for the first time that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
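A bare-bones numerical reading of the two scores is sketched below: the conventional score is the class-wise least-squares residual of representing the test sample, and the inverse score measures how well each class's training samples can be represented once the test sample joins the pool of the other classes' samples. The synthetic data, score normalization, and fusion rule are illustrative assumptions, not the authors' exact CIRLRC formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, per_class, n_class = 50, 8, 4
X = [rng.standard_normal((d, per_class)) for _ in range(n_class)]
y = X[2][:, 0] + 0.05 * rng.standard_normal(d)       # test sample near class 2

def residual(target, basis):
    """Least-squares representation residual of `target` over `basis` columns."""
    a, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(target - basis @ a)

# Conventional score: how well each class's samples represent the test sample
conv = np.array([residual(y, X[c]) for c in range(n_class)])

# Inverse score: how well each class's samples are represented by the test
# sample plus all other classes' samples (y should help its own class most)
inv = []
for c in range(n_class):
    others = np.hstack([X[k] for k in range(n_class) if k != c])
    inv.append(np.mean([residual(X[c][:, j], np.column_stack([y, others]))
                        for j in range(per_class)]))
inv = np.array(inv)

score = conv / conv.sum() + inv / inv.sum()          # naive score fusion
print("predicted class:", int(np.argmin(score)))
```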
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreasen, Daniel, E-mail: dana@dtu.dk; Van Leemput, Koen; Hansen, Rasmus H.
Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.
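A generic k-NN patch-regression sketch of the approach described above (not the authors' exact pipeline): for each patch of the target T1-weighted MRI, find similar MRI patches in a co-registered MRI/CT database and average the CT values at the matched patch centers with similarity weights. The patch size, neighbor count, weighting, and synthetic images are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
mri_atlas = rng.random((128, 128))                    # stand-in atlas MRI
ct_atlas = 1000 * mri_atlas + 20 * rng.standard_normal((128, 128))
target_mri = rng.random((64, 64))                     # stand-in patient MRI

P = 5                                                 # patch size (assumption)

def patches(img):
    """All P x P patches of an image, with their center coordinates."""
    h, w = img.shape
    idx = [(i, j) for i in range(h - P) for j in range(w - P)]
    feats = np.array([img[i:i + P, j:j + P].ravel() for i, j in idx])
    centers = np.array([(i + P // 2, j + P // 2) for i, j in idx])
    return feats, centers

atlas_feats, atlas_centers = patches(mri_atlas)
knn = NearestNeighbors(n_neighbors=10).fit(atlas_feats)

tgt_feats, tgt_centers = patches(target_mri)
dist, ind = knn.kneighbors(tgt_feats)
w = np.exp(-dist**2 / (2 * np.median(dist)**2))       # similarity weights
ct_vals = ct_atlas[atlas_centers[ind, 0], atlas_centers[ind, 1]]

pseudo_ct = np.zeros_like(target_mri)
pseudo_ct[tgt_centers[:, 0], tgt_centers[:, 1]] = (w * ct_vals).sum(1) / w.sum(1)
print("mean pseudo-CT value:", pseudo_ct[tgt_centers[:, 0], tgt_centers[:, 1]].mean())
```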
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
Implicit staggered-grid finite-difference (ISFD) schemes are competitive for their great accuracy and stability, but their coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, and retains the property of the MA method that the numerical errors stay within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy and achieve greater precision than the conventional scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation and is more efficient than the conventional scheme for elastic modeling.
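To make the idea concrete, the sketch below fits staggered-grid first-derivative coefficients by directly minimizing the maximum dispersion error over a wavenumber band, starting from Taylor coefficients. Direct Nelder-Mead minimization of the max error is a crude stand-in for the Remez exchange used in the paper, the stencil length and controlled band are assumptions, and an explicit-style staggered stencil is used here purely for illustration rather than the paper's implicit scheme.

```python
import numpy as np
from scipy.optimize import minimize

M = 4                                        # half stencil length (assumption)
kh = np.linspace(1e-3, 0.85 * np.pi, 400)    # controlled wavenumber band

def effective_kh(c):
    """Spectral response of a staggered-grid first-derivative stencil."""
    m = np.arange(1, M + 1)[:, None]
    return 2.0 * np.sum(c[:, None] * np.sin((2 * m - 1) * kh / 2), axis=0)

def max_error(c):
    return np.max(np.abs(effective_kh(np.asarray(c)) - kh))

# Taylor-series starting point: the classic 4th-order pair padded with zeros
c0 = np.array([9/8, -1/24, 0.0, 0.0])
res = minimize(max_error, c0, method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 40000})
print("optimized coefficients:", res.x)
print("max error (Taylor)  :", max_error(c0))
print("max error (minimax) :", max_error(res.x))
```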
Okamoto, Takuma; Sakaguchi, Atsushi
2017-03-01
Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been presented in computer simulations. To demonstrate the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
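Of the compared baselines, least squares-based pressure matching is the easiest to sketch: build the Green's function matrix from the 64 array elements to control points in both zones, demand unit pressure in the bright zone and zero in the dark zone, and solve a regularized least-squares problem. The geometry, frequency, and regularization below are assumptions for illustration, not the experimental setup.

```python
import numpy as np

f, c = 1000.0, 343.0                     # frequency (Hz) and sound speed (m/s)
k = 2 * np.pi * f / c
src = np.stack([np.linspace(-1.26, 1.26, 64), np.zeros(64)], axis=1)  # 4 cm pitch

def zone(cx, cy):
    """5 x 5 grid of control points around (cx, cy)."""
    gx, gy = np.meshgrid(np.linspace(-0.1, 0.1, 5) + cx,
                         np.linspace(-0.1, 0.1, 5) + cy)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

bright, dark = zone(-0.5, 1.5), zone(0.5, 1.5)
ctrl = np.vstack([bright, dark])

# Free-field (3-D point source) Green's functions from each speaker
r = np.linalg.norm(ctrl[:, None, :] - src[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

# Unit pressure in the bright zone, zero in the dark zone, regularized LS
p_target = np.concatenate([np.ones(len(bright)), np.zeros(len(dark))])
lam = 1e-3                               # Tikhonov regularization (assumption)
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(64), G.conj().T @ p_target)

contrast = 10 * np.log10(np.mean(np.abs(G[:len(bright)] @ q) ** 2)
                         / np.mean(np.abs(G[len(bright):] @ q) ** 2))
print(f"bright/dark acoustic contrast: {contrast:.1f} dB")
```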
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method.
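The measurement-based computation reduces to simple geometry: subtract the residual length from the known needle length to get the insertion depth, then map the template hole and depth into image coordinates with the pre-established transformation. All names and numbers in the sketch are hypothetical; the real transformation factor comes from the one-time probe/template calibration described above.

```python
import numpy as np

NEEDLE_LENGTH = 240.0        # total needle length in mm (assumption)
T = np.array([[1.0, 0.0, 0.0, -2.1],     # illustrative calibration transform,
              [0.0, 1.0, 0.0,  0.8],     # established once from probe/template
              [0.0, 0.0, 1.0, -5.0],     # measurements in practice
              [0.0, 0.0, 0.0,  1.0]])

def tip_position(template_hole_xy, residual_length):
    """Tip coordinates (mm) in TRUS image space from physical measurements."""
    depth = NEEDLE_LENGTH - residual_length   # insertion depth past the template
    p = np.array([*template_hole_xy, depth, 1.0])
    return (T @ p)[:3]

print(tip_position((10.0, 25.0), residual_length=62.5))
```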
Impact Assessment and Environmental Evaluation of Various Ammonia Production Processes
NASA Astrophysics Data System (ADS)
Bicer, Yusuf; Dincer, Ibrahim; Vezina, Greg; Raso, Frank
2017-05-01
In the current study, ammonia generation routes based on conventional resources are comparatively studied through a comprehensive life cycle assessment. The selected ammonia generation options range from the most widely used steam methane reforming to partial oxidation of heavy oil. The chosen ammonia synthesis process is the most common commercially available Haber-Bosch process. The energy input essential for these methods is drawn from various conventional resources such as coal, nuclear, natural gas, and heavy oil. Using the life cycle assessment methodology, the environmental impacts of the selected methods are identified and quantified from cradle to gate. The life cycle assessment outcomes show that the nuclear electrolysis-based ammonia generation method yields the lowest global warming and climate change impacts, while the coal-based electrolysis options carry the highest environmental burdens. The calculated greenhouse gas emission from nuclear-based electrolysis is 0.48 kg CO2 equivalent per kg of ammonia, compared with 13.6 kg CO2 for the coal-based electrolysis method.
ERIC Educational Resources Information Center
Mattox, Daniel V., Jr.
Research compared conventional and experimental methods of instruction in a teacher education media course. The conventional method relied upon factual presentations to heterogeneous groups, while the experimental utilized homogeneous clusters of students and stressed individualized instruction. A pretest-posttest, experimental-control group…
Jacob, M E; Bai, J; Renter, D G; Rogers, A T; Shi, X; Nagaraja, T G
2014-02-01
Detection of Escherichia coli O157 in cattle feces has traditionally used culture-based methods; PCR-based methods have been suggested as an alternative. We aimed to determine if multiplex real-time (mq) or conventional PCR methods could reliably detect cattle naturally shedding high (≥10^4 CFU/g of feces) and low (∼10^2 CFU/g of feces) concentrations of E. coli O157. Feces were collected from pens of feedlot cattle and evaluated for E. coli O157 by culture methods. Samples were categorized as (i) high shedders, (ii) immunomagnetic separation (IMS) positive after enrichment, or (iii) culture negative. DNA was extracted pre- and postenrichment from 100 fecal samples from each category (high shedder, IMS positive, culture negative) and subjected to mqPCR and conventional PCR assays based on detecting three genes, rfbE, stx1, and stx2. In feces from cattle determined to be E. coli O157 high shedders by culture, 37% were positive by mqPCR prior to enrichment; 85% of samples were positive after enrichment. In IMS-positive samples, 4% were positive by mqPCR prior to enrichment, while 43% were positive after enrichment. In culture-negative feces, 7% were positive by mqPCR prior to enrichment, and 40% were positive after enrichment. The proportions of high-shedder-positive and culture-positive (high shedder and IMS) samples were significantly different from the proportions of mqPCR-positive samples before and after enrichment (P < 0.01). Similar results were observed for conventional PCR. Our data suggest that mqPCR and conventional PCR are most useful in identifying high-shedder animals and may not be an appropriate substitute for culture-based methods for detection of E. coli O157 in cattle feces.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-02-20
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
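The intensity-based core of such an estimate is easy to sketch: matched filter the pressure and the two particle-velocity channels with the transmitted replica, then take the azimuth from the time-averaged acoustic intensity components, theta = atan2(Iy, Ix). The plane-wave signal model, noise level, and pulse parameters below are assumptions, and the sketch omits the paper's active-sonar specifics.

```python
import numpy as np

fs, f0, T = 48_000, 2_000, 0.05         # sample rate, pulse frequency, duration
t = np.arange(int(fs * T)) / fs
replica = np.sin(2 * np.pi * f0 * t)    # transmitted replica
theta_true = np.deg2rad(35.0)

rng = np.random.default_rng(0)
noise = lambda: 0.5 * rng.standard_normal(t.size)
p  = replica + noise()                            # pressure channel
vx = np.cos(theta_true) * replica + noise()       # particle velocity, x
vy = np.sin(theta_true) * replica + noise()       # particle velocity, y

def mf(x):
    """Matched filter: correlate a channel with the transmitted replica."""
    return np.correlate(x, replica, mode="same")

pm, vxm, vym = mf(p), mf(vx), mf(vy)
Ix, Iy = np.mean(pm * vxm), np.mean(pm * vym)     # time-averaged intensity
print("estimated azimuth (deg):", np.rad2deg(np.arctan2(Iy, Ix)))
```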
A 3D generic inverse dynamic method using wrench notation and quaternion algebra.
Dumas, R; Aissaoui, R; de Guise, J A
2004-06-01
In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.
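A minimal sketch of the two ingredients named above, wrench notation and quaternion algebra: a wrench stacks force and moment into one 6-vector, and a unit quaternion rotates both parts without any Euler/Cardan sequence, with the moment corrected for the change of reference point. The transform convention (F' = RF, M' = RM + p x RF) and the example numbers are assumptions for illustration, not the paper's full recursive formulation.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, u = q[0], np.asarray(q[1:])
    return v + 2 * np.cross(u, np.cross(u, v) + w * v)

def transform_wrench(q, p, wrench):
    """Express a wrench [F, M] after rotation q and moment-point shift p."""
    F, M = wrench[:3], wrench[3:]
    Fn = quat_rotate(q, F)
    Mn = quat_rotate(q, M) + np.cross(p, Fn)
    return np.concatenate([Fn, Mn])

# 90-degree rotation about z, applied to a pure x-force, moment taken 0.1 m away
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(transform_wrench(q, p=np.array([0.0, 0.0, 0.1]),
                       wrench=np.array([1.0, 0, 0, 0, 0, 0])))
```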
A simple, less invasive stripper micropipetter-based technique for day 3 embryo biopsy.
Cedillo, Luciano; Ocampo-Bárcenas, Azucena; Maldonado, Israel; Valdez-Morales, Francisco J; Camargo, Felipe; López-Bayghen, Esther
2016-01-01
Preimplantation genetic screening (PGS) is an important procedure for in vitro fertilization (IVF). A key step of PGS, blastomere removal, is beset by many technical issues. The aim of this study was to compare a simpler procedure based on the Stripper Micropipetter, named S-biopsy, to the conventional aspiration method. On Day 3, 368 high-quality embryos (>7 cells on Day 3 with <10% fragmentation) were collected from 38 women. For each patient, their embryos were divided between the conventional method (n = 188) and the S-biopsy method (n = 180). The conventional method was performed using a standardized protocol. For the S-biopsy method, a laser was used to remove a significantly smaller portion of the zona pellucida. Afterwards, the complete embryo was aspirated with a Stripper Micropipetter, forcing the removal of the blastomere. Selected blastomeres went to PGS using CGH microarrays. Embryo integrity and blastocyst formation were assessed on Day 5. Differences between groups were assessed by either the Mann-Whitney test or the Fisher exact test. Both methods resulted in the removal of only one blastomere. The S-biopsy and the conventional method did not differ in terms of affecting embryo integrity (95.0% vs. 95.7%) or blastocyst formation (72.7% vs. 70.7%). PGS analysis indicated that aneuploidy rates were similar between the two methods (63.1% vs. 65.2%). However, the time required to perform the S-biopsy method (179.2 ± 17.5 s) was significantly shorter (5-fold) than the conventional method. The S-biopsy method is comparable to the conventional method used to remove a blastomere for PGS, but requires less time. Furthermore, due to its simplicity, the S-biopsy technique is better suited for IVF laboratories.
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
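Steps (1) and (2) of the sorting scheme can be sketched directly: split the trace into cycles at successive peaks, describe each cycle by its amplitude and period, and keep any (amplitude, period) group holding more than 10% of the cycles as a main breathing pattern. The synthetic trace and coarse binning below are assumptions; step (3), the result-driven reconstruction, is not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10.0                                   # sampling rate (Hz), assumption
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(1)
drift = 1 + 0.3 * rng.standard_normal(t.size // 50).repeat(50)
signal = drift * np.sin(2 * np.pi * 0.25 * t)   # breathing trace with variation

peaks, _ = find_peaks(signal, distance=fs * 2)  # cycle boundaries at peaks
amp, per = [], []
for a, b in zip(peaks[:-1], peaks[1:]):
    seg = signal[a:b]
    amp.append(seg.max() - seg.min())           # cycle amplitude
    per.append((b - a) / fs)                    # cycle period (s)

# Group cycles on a coarse (amplitude, period) grid
grid = np.stack([np.round(np.array(amp), 0), np.round(np.array(per), 0)], axis=1)
groups, counts = np.unique(grid, axis=0, return_counts=True)
main = groups[counts > 0.10 * len(grid)]        # main breathing patterns
print("main (amplitude, period) groups:\n", main)
```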
Sacristán, Carlos; Carballo, Matilde; Muñoz, María Jesús; Bellière, Edwige Nina; Neves, Elena; Nogal, Verónica; Esperón, Fernando
2015-12-15
Cetacean morbillivirus (CeMV) (family Paramyxoviridae, genus Morbillivirus) is considered the most pathogenic virus of cetaceans. It was first implicated in the bottlenose dolphin (Tursiops truncatus) mass stranding episode along the Northwestern Atlantic coast in the late 1980s, and in several more recent worldwide epizootics in different Odontoceti species. This study describes a new one-step real-time reverse transcription fast polymerase chain reaction (real-time RT-fast PCR) method based on SYBR® Green to detect a fragment of the CeMV fusion protein gene. This primer set also works for conventional RT-PCR diagnosis. This method detected and identified all three well-characterized strains of CeMV: porpoise morbillivirus (PMV), dolphin morbillivirus (DMV) and pilot whale morbillivirus (PWMV). Relative sensitivity was measured by comparing the results obtained from 10-fold dilution series of PMV and DMV positive controls and a PWMV field sample, to those obtained by the previously described conventional phosphoprotein gene based RT-PCR method. Both the conventional and real-time RT-PCR methods involving the fusion protein gene were 100- to 1000-fold more sensitive than the previously described conventional RT-PCR method.
Space-based optical image encryption.
Chen, Wen; Chen, Xudong
2010-12-20
In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to a 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method can provide a new optical encryption strategy instead of conventional 2D processing, and may open up a new research perspective for optical image encryption.
Jafari, Zahra
2014-01-01
Background: Team-based learning (TBL) is a structured type of cooperative learning that has growing application in medical education. This study compares levels of student learning and teaching satisfaction for a neurology course between conventional lecture and team-based learning. Methods: The study incorporated 70 students aged 19 to 22 years at the school of rehabilitation. One half of the 16 sessions of the neurology course was taught by lectures and the second half with team-based learning. Teaching satisfaction for the teaching methods was determined on a scale with 5 options in response to 20 questions. Results: A significant difference was found between lecture-based and team-based learning in final scores (p<0.001). The content validity index of the scale of student satisfaction was 94%, and the external and internal consistencies of the scale were 0.954 and 0.921, respectively (p<0.001). The degree of satisfaction with TBL compared to the lecture method was 81.3%. Conclusion: Results revealed more success and student satisfaction from team-based learning compared to conventional lectures in teaching neurology to undergraduate students. It seems that application of new teaching methods such as team-based learning could be effectively introduced to improve levels of education and student learning. PMID:25250250
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, joint iteration between gain-phase error estimation and position error estimation is not required, so the problem of suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
Nauleau, Pierre; Apostolakis, Iason; McGarry, Matthew; Konofagou, Elisa
2018-05-29
The stiffness of the arteries is known to be an indicator of the progression of various cardiovascular diseases. Clinically, the pulse wave velocity (PWV) is used as a surrogate for arterial stiffness. Pulse wave imaging (PWI) is a non-invasive, ultrasound-based imaging technique capable of mapping the motion of the vessel walls, allowing the local assessment of arterial properties. Conventionally, a distinctive feature of the displacement wave (e.g. the 50% upstroke) is tracked across the map to estimate the PWV. However, the presence of reflections, such as those generated at the carotid bifurcation, can bias the PWV estimation. In this paper, we propose a two-step cross-correlation based method to characterize arteries using the information available in the PWI spatio-temporal map. First, the area under the cross-correlation curve is proposed as an index for locating the regions of different properties. Second, a local peak of the cross-correlation function is tracked to obtain a less biased estimate of the PWV. Three series of experiments were conducted in phantoms to evaluate the capabilities of the proposed method compared with the conventional method. In the ideal case of a homogeneous phantom, the two methods performed similarly and correctly estimated the PWV. In the presence of reflections, the proposed method provided a more accurate estimate than conventional processing: e.g. for the soft phantom, biases of -0.27 and -0.71 m·s⁻¹ were observed. In a third series of experiments, the correlation-based method was able to locate two regions of different properties with an error smaller than 1 mm. It also provided more accurate PWV estimates than conventional processing (biases: -0.12 versus -0.26 m·s⁻¹). Finally, the in vivo feasibility of the proposed method was demonstrated in eleven healthy subjects. The results indicate that the correlation-based method might be less precise in vivo but more accurate than the conventional method.
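The core cross-correlation step can be sketched as follows: for adjacent beams in the spatio-temporal displacement map, the lag of the cross-correlation peak gives the wave travel time between beams, and PWV follows from regressing distance on cumulative lag. The synthetic waveform, frame rate, and geometry are assumptions; the paper's two-step method additionally uses the area under the cross-correlation curve to segment regions, which this toy omits.

```python
import numpy as np

fs = 10_000.0                 # frame rate (Hz), assumption
dx = 0.5e-3                   # beam spacing (m), assumption
pwv_true = 5.0                # m/s
n_beams = 40
t = np.arange(0, 0.04, 1 / fs)

def pulse(delay):
    """Synthetic wall-displacement wave arriving at one beam at `delay`."""
    return np.exp(-((t - delay) / 1.5e-3) ** 2)

disp = np.stack([pulse(0.005 + i * dx / pwv_true) for i in range(n_beams)])

# Lag of the cross-correlation peak between each pair of adjacent beams
lags = []
for i in range(n_beams - 1):
    xc = np.correlate(disp[i + 1], disp[i], mode="full")
    lags.append((np.argmax(xc) - (t.size - 1)) / fs)

# PWV = slope of propagation distance vs. cumulative travel time
distance = dx * np.arange(1, n_beams)
slope = np.polyfit(np.cumsum(lags), distance, 1)[0]
print(f"estimated PWV: {slope:.2f} m/s")
```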
Agrawal, Yuvraj; Desai, Aravind; Mehta, Jaysheel
2011-12-01
We aimed to quantify the severity of hallux valgus based on the lateral sesamoid position and to establish a correlation of our simple assessment method with the conventional radiological assessments. We reviewed 122 dorso-plantar weight-bearing radiographs of feet. The intermetatarsal and hallux valgus angles were measured by the conventional methods, and the position of the lateral sesamoid in relation to the first metatarsal neck was assessed by our new and simple method. Significant correlation was noted between the intermetatarsal angle and the lateral sesamoid position (Rho 0.74, p < 0.0001), and between the lateral sesamoid position and the hallux valgus angle (Rho 0.56, p < 0.0001). Similar trends were noted in different grades of severity of hallux valgus in all three methods of assessment. Our method of assessing hallux valgus deformity based on the lateral sesamoid position is simple, less time-consuming, and has a statistically significant correlation with the established conventional radiological measurements.
Gajjar, Ketan; Ahmadzai, Abdullah A.; Valasoulis, George; Trevisan, Júlio; Founta, Christina; Nasioutziki, Maria; Loufopoulos, Aristotelis; Kyrgiou, Maria; Stasinou, Sofia Melina; Karakitsos, Petros; Paraskevaidis, Evangelos; Da Gama-Rose, Bianca; Martin-Hirsch, Pierre L.; Martin, Francis L.
2014-01-01
Background: Subjective visual assessment of cervical cytology is flawed, and this can manifest itself in inter- and intra-observer variability, resulting ultimately in discordance in the grading categorisation of samples in screening vs. representative histology. Biospectroscopy methods have been suggested as sensor-based tools that can deliver objective assessments of cytology. However, studies to date have apparently been flawed by a corresponding lack of diagnostic efficiency when samples have previously been classed using cytology screening. This raises the question as to whether categorisation of cervical cytology based on imperfect conventional screening reduces the diagnostic accuracy of biospectroscopy approaches; are these latter methods more accurate and able to diagnose underlying disease? The purpose of this study was to compare the objective accuracy of infrared (IR) spectroscopy of cervical cytology samples using conventional cytology vs. histology-based categorisation. Methods: Within a typical clinical setting, a total of n = 322 liquid-based cytology samples were collected immediately before biopsy. Of these, it was possible to acquire subsequent histology for n = 154. Cytology samples were categorised according to conventional screening methods and subsequently interrogated employing attenuated total reflection Fourier-transform IR (ATR-FTIR) spectroscopy. IR spectra were pre-processed and analysed using linear discriminant analysis. Dunn's test was applied to identify the differences in spectra. Within the diagnostic categories, histology allowed us to determine the comparative efficiency of conventional screening vs. biospectroscopy to correctly identify either true atypia or underlying disease. Results: Conventional cytology-based screening results in poor sensitivity and specificity. IR spectra derived from cervical cytology do not appear to discriminate in a diagnostic fashion when categories are based on conventional screening; scores plots of IR spectra exhibit marked crossover of spectral points between different cytological categories. Although significant differences between spectral bands in different categories are noted, crossover samples point to the potential for poor specificity and hamper the development of biospectroscopy as a diagnostic tool. However, when histology-based categories are used to conduct the analyses, the scores plots of IR spectra exhibit markedly better segregation. Conclusions: Histology demonstrates that ATR-FTIR spectroscopy of liquid-based cytology identifies the presence of underlying atypia or disease missed in conventional cytology screening. This study points to an urgent need for a future biospectroscopy study where categories are based on such histology. It will allow for the validation of this approach as a screening tool. PMID:24404130
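The analysis pattern described above is easy to sketch: vector-normalized IR spectra classified with linear discriminant analysis, with labels taken either from cytology screening or from histology. Synthetic spectra stand in for the ATR-FTIR data; the spectral dimension and the injected class separation are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 154, 100
labels = rng.integers(0, 2, n_samples)            # e.g. normal vs. disease
spectra = rng.standard_normal((n_samples, n_wavenumbers))
spectra += labels[:, None] * 0.3                  # injected class difference

# Vector normalization (a common IR pre-processing step), then LDA
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, spectra, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```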
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently for high-volume datasets because of case-retrieval time. Some previous studies overcame this problem by clustering the case-base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to conventional CBR while incurring significantly less computational cost.
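The cluster-then-retrieve idea that CM-CBR builds on can be sketched as follows: partition the case base offline, then at prediction time search for nearest neighbors only inside the cluster closest to the target case. The data, cluster count, and neighbor count are illustrative assumptions, and the merging step that gives CM-CBR its name is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 8))                  # case features (synthetic)
y = X[:, 0] * 2 + rng.standard_normal(5000) * 0.1   # case outcomes

# Offline: cluster the case base and index each cluster separately
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
indices = [np.where(km.labels_ == c)[0] for c in range(20)]
nn = [NearestNeighbors(n_neighbors=5).fit(X[idx]) for idx in indices]

def predict(target):
    """Retrieve neighbors only from the nearest cluster, then reuse outcomes."""
    c = km.predict(target[None, :])[0]
    _, local = nn[c].kneighbors(target[None, :])
    return y[indices[c][local[0]]].mean()

print(predict(rng.standard_normal(8)))
```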
Theory and practice of conventional adventitious virus testing.
Gregersen, Jens-Peter
2011-01-01
CONFERENCE PROCEEDING Proceedings of the PDA/FDA Adventitious Viruses in Biologics: Detection and Mitigation Strategies Workshop in Bethesda, MD, USA; December 1-3, 2010 Guest Editors: Arifa Khan (Bethesda, MD), Patricia Hughes (Bethesda, MD) and Michael Wiebe (San Francisco, CA) For decades conventional tests in cell cultures and in laboratory animals have served as standard methods for broad-spectrum screening for adventitious viruses. New virus detection methods based on molecular biology have broadened and improved our knowledge about potential contaminating viruses and about the suitability of the conventional test methods. This paper summarizes and discusses practical aspects of conventional test schemes, such as detectability of various viruses, questionable or false-positive results, animal numbers needed, time and cost of testing, and applicability for rapidly changing starting materials. Strategies to improve the virus safety of biological medicinal products are proposed. The strategies should be based upon a flexible application of existing and new methods along with a scientifically based risk assessment. However, testing alone does not guarantee the absence of adventitious agents and must be accompanied by virus removing or virus inactivating process steps for critical starting materials, raw materials, and for the drug product.
Retention of denture bases fabricated by three different processing techniques – An in vivo study
Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen
2016-01-01
Aim: Distortion due to polymerization shrinkage compromises retention. The aim was to evaluate the retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by the three polymerization techniques. A loop was attached to the finished denture bases to estimate the force required to dislodge them with a retention apparatus. Readings were subjected to nonparametric Friedman two-way analysis of variance followed by Bonferroni correction methods and the Wilcoxon matched-pairs signed-ranks test. Results: Denture bases fabricated by injection molding (3740 g) and anchorized (2913 g) techniques recorded greater retention values than the conventional technique (2468 g). Significant differences were seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique; the least retention was seen with the conventional molding technique. PMID:27382542
Systematic methods for the design of a class of fuzzy logic controllers
NASA Astrophysics Data System (ADS)
Yasin, Saad Yaser
2002-09-01
Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-base membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was used for an inverted pendulum control problem. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industries were studied. The first automotive-related application was the idle speed of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques and experimental data. The simulation and experimental results showed that a fuzzy controller with conventional-controller-like performance could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive Proportional plus Integral plus Derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers, and in certain cases and conditions the developed controller outperformed the conventional PID and LQR controllers. The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional control based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was also studied. Results indicated that the selection of certain parameters or combinations of parameters greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
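The quantity all these FEP variants estimate is the Zwanzig free-energy difference, dA = -kT ln < exp(-(U1 - U0)/kT) >_0, averaged over configurations sampled in state 0. The sketch below evaluates it for a 1-D harmonic toy system where the exact answer is known analytically; the force constants, temperature, and sample size are illustrative assumptions, and no QM/MM energies are involved.

```python
import numpy as np

kT = 1.0
k0, k1 = 1.0, 2.0                     # state-0 and state-1 force constants
rng = np.random.default_rng(0)

# Sample x from state 0's Boltzmann distribution (Gaussian for a harmonic well)
x = rng.normal(0.0, np.sqrt(kT / k0), size=200_000)
dU = 0.5 * (k1 - k0) * x**2           # U1(x) - U0(x)
dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))

# Exact free-energy difference between the two harmonic wells
dA_exact = -0.5 * kT * np.log(k0 / k1)
print(f"FEP estimate: {dA_fep:.4f}   exact: {dA_exact:.4f}")
```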
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure.
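A toy version of carrying dose uncertainty through the risk analysis is sketched below: instead of one best-estimate dose vector, the posterior for the dose-response coefficient is averaged over many Monte Carlo dose realizations with shared and unshared error components. The logistic model, grid posterior, fixed intercept, and error magnitudes are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_realizations = 500, 100
true_dose = rng.gamma(2.0, 0.5, n)
risk = 1 / (1 + np.exp(-(-2.0 + 1.2 * true_dose)))   # true dose-response
cases = rng.random(n) < risk

# Each realization jointly perturbs doses with shared and unshared errors,
# as a two-dimensional Monte Carlo dosimetry system would provide
shared = rng.lognormal(0, 0.2, n_realizations)
dose_sets = (true_dose[None, :] * shared[:, None]
             * rng.lognormal(0, 0.3, (n_realizations, n)))

beta_grid = np.linspace(0.0, 3.0, 301)
avg_post = np.zeros_like(beta_grid)
for doses in dose_sets:
    p = 1 / (1 + np.exp(-(-2.0 + beta_grid[:, None] * doses[None, :])))
    loglik = np.log(p[:, cases]).sum(1) + np.log(1 - p[:, ~cases]).sum(1)
    post = np.exp(loglik - loglik.max())
    avg_post += post / post.sum()                    # flat prior, grid posterior
avg_post /= n_realizations
print("posterior mean of dose coefficient:", (beta_grid * avg_post).sum())
```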
Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.
2014-01-01
Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study Design and Setting We compared the performance of these classification methods with that of conventional classification trees to classify patients with heart failure according to the following subtypes: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure subtype compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
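The model families compared in this study are all available in scikit-learn; a sketch of such a comparison on synthetic data (a stand-in for the Ontario heart failure cohort, which is not public) might look as follows.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data standing in for HFPEF vs. HFREF.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "classification tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(random_state=0),
    "boosting": AdaBoostClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```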
Jafari, Zahra
2014-01-01
Team-based learning (TBL) is a structured type of cooperative learning that has growing application in medical education. This study compares levels of student learning and teaching satisfaction for a neurology course between conventional lecture and team-based learning. The study incorporated 70 students aged 19 to 22 years at the school of rehabilitation. One half of the 16 sessions of the neurology course was taught by lectures and the other half with team-based learning. Satisfaction with the teaching methods was determined on a scale with 5 response options to 20 questions. A significant difference was found between lecture-based and team-based learning in final scores (p<0.001). The content validity index of the student satisfaction scale was 94%, and the external and internal consistencies of the scale were 0.954 and 0.921, respectively (p<0.001). The degree of satisfaction with TBL compared to the lecture method was 81.3%. The results revealed more success and greater student satisfaction with team-based learning compared to conventional lectures in teaching neurology to undergraduate students. It seems that new teaching methods such as team-based learning could be effectively introduced to improve levels of education and student learning.
Pieterman, Elise D; Budde, Ricardo P J; Robbers-Visser, Daniëlle; van Domburg, Ron T; Helbing, Willem A
2017-09-01
Follow-up of right ventricular performance is important for patients with congenital heart disease. Cardiac magnetic resonance imaging is optimal for this purpose. However, the observer-dependency of manual analysis of right ventricular volumes limits its use. Knowledge-based reconstruction is a new semiautomatic analysis tool that uses a database including knowledge of right ventricular shape in various congenital heart diseases. We evaluated whether knowledge-based reconstruction is a good alternative to conventional analysis. To assess the inter- and intra-observer variability and agreement of knowledge-based versus conventional analysis of magnetic resonance right ventricular volumes, analysis was done by two observers in a mixed group of 22 patients with congenital heart disease affecting right ventricular loading conditions (dextro-transposition of the great arteries and right ventricle to pulmonary artery conduit) and a group of 17 healthy children. We used Bland-Altman analysis and coefficient of variation. Comparison between the conventional method and the knowledge-based method showed a systematically higher volume with the latter method. We found an overestimation for end-diastolic volume (bias -40 ± 24 mL, r = .956), end-systolic volume (bias -34 ± 24 mL, r = .943), stroke volume (bias -6 ± 17 mL, r = .735) and an underestimation of ejection fraction (bias 7 ± 7%, r = .671) by knowledge-based reconstruction. The intra-observer variability of knowledge-based reconstruction varied with a coefficient of variation of 9% for end-diastolic volume and 22% for stroke volume. The same trend was noted for inter-observer variability. A systematic difference (overestimation) was noted for right ventricular size as assessed with knowledge-based reconstruction compared with conventional methods for analysis. Observer variability for the new method was comparable to what has been reported for the right ventricle in children and congenital heart disease with conventional analysis. © 2017 Wiley Periodicals, Inc.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
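The core of IPM is a planar homography that maps image pixels to the ground plane; floor pixels obey it while obstacle pixels violate it, which supplies the geometric cue for segmentation. A minimal sketch with OpenCV, where all calibration points are hypothetical:

```python
import cv2
import numpy as np

def inverse_perspective_map(image, src_pts, dst_pts, out_size):
    """Warp a camera image to a bird's-eye (ground-plane) view.
    src_pts: four image points known to lie on the floor; dst_pts: their
    positions in the output ground-plane image; out_size: (width, height)."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, out_size)

# Hypothetical calibration: a 1 m x 1 m floor square seen by the camera.
src = [(220, 480), (420, 480), (380, 300), (260, 300)]   # image pixels
dst = [(100, 400), (300, 400), (300, 200), (100, 200)]   # ground-plane pixels
# birdseye = inverse_perspective_map(frame, src, dst, (400, 400))
```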
Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.
Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing
2016-10-01
The method based on the rotation of the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. Conventional fast Fourier transform (FFT)-based methods therefore need a spectrum interpolation to approximate the values on equidistant sampling points; because of the numerical error this interpolation introduces, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between tilted planes is transformed into the problem of evaluating a discrete Fourier transform on unevenly spaced sampling points, which can be done efficiently and precisely through the nonuniform fast Fourier transform (NUFFT) method. The most important advantage of this method is that the conventional spectrum interpolation is avoided and high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
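The computational kernel the paper replaces is the evaluation of a Fourier sum at unevenly spaced (rotated) frequency nodes. The toy below does this by a direct O(N²) nonuniform DFT with NumPy only, to make the idea concrete; a production code would use an NUFFT library for speed, and the full method also involves frequency-rotation bookkeeping omitted here.

```python
import numpy as np

def field_on_tilted_plane(u0, fx_rot, x_out):
    """Toy 1-D illustration: compute the angular spectrum of u0 with a plain
    FFT (uniform samples), then evaluate the inverse transform directly at
    fx_rot -- the rotated, hence unevenly spaced, positions of those uniform
    frequency samples -- for output positions x_out, avoiding the spectrum
    interpolation of conventional FFT-based methods."""
    U = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u0)))
    phase = np.exp(2j * np.pi * np.outer(x_out, fx_rot))  # nonuniform nodes
    return phase @ U / len(u0)
```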
Fundamental Vocabulary Selection Based on Word Familiarity
NASA Astrophysics Data System (ADS)
Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki
This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile a sufficiently balanced corpus. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.
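The selection criterion itself is simple enough to state in a few lines; the sketch below applies the paper's minimum familiarity threshold of 5 to a toy rating table (the words and ratings shown are made up).

```python
# (word, familiarity) pairs; familiarity ranges from 1 (least) to 7 (most).
ratings = [("hon", 6.8), ("inu", 6.4), ("gengo", 5.1), ("shinshaku", 3.2)]

FAMILIARITY_THRESHOLD = 5.0  # the paper's minimum familiarity criterion

fundamental_vocabulary = [w for w, fam in ratings if fam >= FAMILIARITY_THRESHOLD]
print(fundamental_vocabulary)  # -> ['hon', 'inu', 'gengo']
```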
Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However, these approaches, which require a ...
Highly efficient preparation of sphingoid bases from glucosylceramides by chemoenzymatic method
Gowda, Siddabasave Gowda B.; Usuki, Seigo; Hammam, Mostafa A. S.; Murai, Yuta; Igarashi, Yasuyuki; Monde, Kenji
2016-01-01
Sphingoid base derivatives have attracted increasing attention as promising chemotherapeutic candidates against lifestyle diseases such as diabetes and cancer. Natural sphingoid bases can be a potential resource instead of those derived by time-consuming total organic synthesis. In particular, glucosylceramides (GlcCers) in food plants are enriched sources of sphingoid bases, differing from those of animals. Several chemical methodologies to transform GlcCers to sphingoid bases have already been investigated; however, these conventional methods using acid or alkaline hydrolysis are not efficient due to poor reaction yields, the production of complex by-products, and the resulting separation problems. In this study, an extremely efficient and practical chemoenzymatic transformation method has been developed using microwave-enhanced butanolysis of GlcCers and a large amount of readily available almond β-glucosidase for the deglycosylation of lysoGlcCers. The method is superior to conventional acid/base hydrolysis methods in its rapidity and its reaction cleanness (no isomerization, no rearrangement) with excellent overall yield. PMID:26667669
Color digital halftoning taking colorimetric color reproduction into account
NASA Astrophysics Data System (ADS)
Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi
1996-01-01
Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital half-toning. Assuming that the input to a bilevel color printer is given in CIE-XYZ tristimulus values or CIE-LAB values instead of the more conventional RGB or YMC values, two modified versions based on vector operation in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the method using the LAB color space, resulted in better color reproduction performance than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that the modified method (2) with a thresholding technique achieves a good spatial image quality.
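A sketch of the vector-operation idea: error diffusion where the "nearest printable color" is chosen by Euclidean distance in a colorimetric space and the full 3-D color error is diffused. The palette values and Floyd-Steinberg weights are common illustrative choices, not the exact configuration of the paper.

```python
import numpy as np

# Printable colors of a bilevel color printer expressed in CIELAB
# (approximate sRGB primary values; a real system would measure these).
PALETTE_LAB = np.array([[0, 0, 0], [100, 0, 0], [53, 80, 67], [88, -86, 83],
                        [32, 79, -108], [91, -48, -14], [60, 98, -61],
                        [97, -22, 94]], float)

FS_WEIGHTS = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

def vector_error_diffusion(img_lab):
    """Halftone an image given in CIELAB: pick the palette color nearest in
    LAB (a colorimetric distance) and diffuse the vector color error to
    unprocessed neighbors with Floyd-Steinberg weights."""
    img = img_lab.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            idx = np.argmin(((PALETTE_LAB - img[y, x]) ** 2).sum(axis=1))
            out[y, x] = idx
            err = img[y, x] - PALETTE_LAB[idx]
            for dy, dx, wt in FS_WEIGHTS:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    img[y + dy, x + dx] += wt * err
    return out
```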
NASA Astrophysics Data System (ADS)
Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko
2012-12-01
A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and the nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When additive noise is absent, the GSS-based dereverberation method with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the power-SS-based dereverberation method and conventional CMN, respectively. For reverberant and noisy speech, the GSS-based dereverberation and denoising method achieves a relative word error reduction rate of 12.8% compared to conventional CMN combined with GSS-based additive noise reduction. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
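For orientation, generalized spectral subtraction replaces the fixed power-domain exponent of conventional SS with a tunable one; a compact sketch, with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

def generalized_spectral_subtraction(x_mag, r_mag, alpha=1.0, n=0.1, beta=0.01):
    """GSS on magnitude spectra: subtract the late-reverberation estimate
    r_mag from the observation x_mag in the |X|**n domain (n=2 recovers
    conventional power SS), with a spectral floor beta to avoid negatives."""
    s = x_mag ** n - alpha * r_mag ** n
    s = np.maximum(s, beta * x_mag ** n)
    return s ** (1.0 / n)
```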
2013-07-01
[Table residue from a thermal test report; recoverable information: vacuum heat capacity measured by the conventional MCDS test method at a heating rate of 2 °C/min over temperatures from -75 to 100 °C (average 0.5555 J/g·°C for the first run), after preconditioning for 24 hrs at 125 °C and -29 inch vacuum.]
MRI Volume Fusion Based on 3D Shearlet Decompositions.
Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong
2014-01-01
Nowadays many MRI scans can give 3D volume data with different contrasts, but observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed and evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than fusion methods based on the conventional 2D wavelet, the DT CWT, and their 3D counterparts.
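A sketch of the transform-domain fusion idea, using a single-level 3-D wavelet transform (one of the conventional baselines above) as a stand-in for the band-limited shearlet, with the common max-magnitude coefficient selection rule:

```python
import numpy as np
import pywt

def fuse_volumes(vol_a, vol_b, wavelet="db2"):
    """Fuse two co-registered 3-D volumes: decompose both with a 3-D
    discrete wavelet transform and keep, per subband, the coefficient of
    larger magnitude, then invert. The fusion rule is a common choice,
    not necessarily the one used in the paper."""
    ca, cb = pywt.dwtn(vol_a, wavelet), pywt.dwtn(vol_b, wavelet)
    fused = {k: np.where(np.abs(ca[k]) >= np.abs(cb[k]), ca[k], cb[k])
             for k in ca}
    return pywt.idwtn(fused, wavelet)
```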
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
NASA Astrophysics Data System (ADS)
Nauleau, Pierre; Apostolakis, Iason; McGarry, Matthew; Konofagou, Elisa
2018-06-01
The stiffness of the arteries is known to be an indicator of the progression of various cardiovascular diseases. Clinically, the pulse wave velocity (PWV) is used as a surrogate for arterial stiffness. Pulse wave imaging (PWI) is a non-invasive, ultrasound-based imaging technique capable of mapping the motion of the vessel walls, allowing the local assessment of arterial properties. Conventionally, a distinctive feature of the displacement wave (e.g. the 50% upstroke) is tracked across the map to estimate the PWV. However, the presence of reflections, such as those generated at the carotid bifurcation, can bias the PWV estimation. In this paper, we propose a two-step cross-correlation based method to characterize arteries using the information available in the PWI spatio-temporal map. First, the area under the cross-correlation curve is proposed as an index for locating the regions of different properties. Second, a local peak of the cross-correlation function is tracked to obtain a less biased estimate of the PWV. Three series of experiments were conducted in phantoms to evaluate the capabilities of the proposed method compared with the conventional method. In the ideal case of a homogeneous phantom, the two methods performed similarly and correctly estimated the PWV. In the presence of reflections, the proposed method provided a more accurate estimate than conventional processing: e.g. for the soft phantom, biases of ‑0.27 and ‑0.71 m · s–1 were observed. In a third series of experiments, the correlation-based method was able to locate two regions of different properties with an error smaller than 1 mm. It also provided more accurate PWV estimates than conventional processing (biases: ‑0.12 versus ‑0.26 m · s–1). Finally, the in vivo feasibility of the proposed method was demonstrated in eleven healthy subjects. The results indicate that the correlation-based method might be less precise in vivo but more accurate than the conventional method.
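A toy version of correlation-based wave tracking: estimate the transit delay of the displacement waveform between two wall positions from the peak of their cross-correlation, then convert to velocity. The paper's method further tracks a local (not global) correlation peak to resist reflection bias; everything below is a simplified assumption.

```python
import numpy as np

def pwv_from_crosscorrelation(w1, w2, dx, fs):
    """Estimate PWV from wall-displacement waveforms w1, w2 recorded
    dx meters apart at fs Hz: find the lag maximizing their
    cross-correlation and return distance over delay."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    xc = np.correlate(w2, w1, mode="full")
    lag = np.argmax(xc) - (len(w1) - 1)   # samples by which w2 trails w1
    return dx * fs / lag if lag > 0 else float("nan")
```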
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had control performance superior to that of the conventional PID controller.
Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping
2018-01-01
Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied a beneficial approach of microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to their hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium- and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery and reproducibility. There was a significant correlation in the milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can be a substitute for the conventional method.
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive online tutoring systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
2013-01-01
Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
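The interpolation step described above can be sketched with SciPy's natural cubic spline along the slice axis; this voxelwise toy stands in for the paper's shape-based scheme, which applies the spline to vessel geometry and backscatter jointly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_ivus_slices(volume, n_between=3):
    """Insert `n_between` intermediary slices between consecutive 2-D IVUS
    frames using a natural cubic spline along the pullback (slice) axis,
    rather than pixelwise linear interpolation. volume: (n_slices, h, w)."""
    n = volume.shape[0]
    cs = CubicSpline(np.arange(n), volume, axis=0, bc_type="natural")
    z_new = np.linspace(0, n - 1, (n - 1) * (n_between + 1) + 1)
    return cs(z_new)
```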
Tracking B-Cell Repertoires and Clonal Histories in Normal and Malignant Lymphocytes.
Weston-Bell, Nicola J; Cowan, Graeme; Sahota, Surinder S
2017-01-01
Methods for tracking B-cell repertoires and clonal history in normal and malignant B-cells based on immunoglobulin variable region (IGV) gene analysis have developed rapidly with the advent of massive parallel next-generation sequencing (mpNGS) protocols. mpNGS permits a depth of analysis of IGV genes not hitherto feasible, and presents challenges of bioinformatics analysis, which can be readily met by current pipelines. This strategy offers a potential resolution of B-cell usage at a depth that may capture fully the natural state, in a given biological setting. Conventional methods based on RT-PCR amplification and Sanger sequencing are also available where mpNGS is not accessible. Each method offers distinct advantages. Conventional methods for IGV gene sequencing are readily adaptable to most laboratories and provide an ease of analysis to capture salient features of B-cell use. This chapter describes two methods in detail for analysis of IGV genes, mpNGS and conventional RT-PCR with Sanger sequencing.
Triangular model integrating clinical teaching and assessment
Abdelaziz, Adel; Koshak, Emad
2014-01-01
Structuring clinical teaching is a challenge facing medical education curriculum designers. A variety of instructional methods on different domains of learning are indicated to accommodate different learning styles. Conventional methods of clinical teaching, like training in ambulatory care settings, are prone to the factor of coincidence in having varieties of patient presentations. Accordingly, alternative methods of instruction are indicated to compensate for the deficiencies of these conventional methods. This paper presents an initiative that can be used to design a checklist as a blueprint to guide appropriate selection and implementation of teaching/learning and assessment methods in each of the educational courses and modules based on educational objectives. Three categories of instructional methods were identified, and within each a variety of methods were included. These categories are classroom-type settings, health services-based settings, and community service-based settings. Such categories have framed our triangular model of clinical teaching and assessment. PMID:24624002
Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. Given the cost comparison and the fact that each method, conventional and LANDSAT, is equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas covering more than 10 square miles.
Ayoib, Adilah; Hashim, Uda; Gopinath, Subash C B; Md Arshad, M K
2017-11-01
This review covers the developmental progression from early to modern taxonomy at the cellular level following the advent of electron microscopy, and the advancement of deoxyribonucleic acid (DNA) extraction for extending biological classification to the DNA level. Here, we discuss the fundamental principles of conventional chemical methods of DNA extraction using liquid/liquid extraction (LLE), followed by the development of solid-phase extraction (SPE) methods, as well as recent advances in microfluidic device-based systems for DNA extraction on-chip. We also discuss the importance of DNA extraction, the advantages of these systems over conventional chemical methods, and how the Lab-on-a-Chip (LOC) system will play a crucial role in future achievements.
Evaluation of the White Test for the Intraoperative Detection of Bile Leakage
Leelawat, Kawin; Chaiyabutr, Kittipong; Subwongcharoen, Somboon; Treepongkaruna, Sa-ad
2012-01-01
We assess whether the White test is better than the conventional bile leakage test for the intraoperative detection of bile leakage in hepatectomized patients. This study included 30 patients who received elective liver resection. Both the conventional bile leakage test (injecting an isotonic sodium chloride solution through the cystic duct) and the White test (injecting a fat emulsion solution through the cystic duct) were carried out in the same patients. The detection of bile leakage was compared between the conventional method and the White test. A bile leak was demonstrated in 8 patients (26.7%) by the conventional method and in 19 patients (63.3%) by the White test. In addition, the White test detected a significantly higher number of bile leakage sites compared with the conventional method (Wilcoxon signed-rank test; P < 0.001). The White test is better than the conventional test for the intraoperative detection of bile leakage. Based on our study, we recommend that surgeons investigating bile leakage sites during liver resections should use the White test instead of the conventional bile leakage test. PMID:22547901
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud
2016-01-01
Introduction: Delay in diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by convenience sampling and were randomly divided into three groups: web-based, simulation-based, and conventional training. All three groups took an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group were trained through a 4 h presentation by the researchers. The data gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (1, 2, 4, 5, 6 and 7 (P = 0.001), 8 (P = 0.027)) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage volume, but no significant difference was found among the three training groups. Web-based training can be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175
Comparison between a model-based and a conventional pyramid sensor reconstructor.
Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe
2007-08-20
A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics has been presented. Linearizations of the model, represented as Jacobian matrices, are used to improve the P-WFS phase estimates. Simulations show that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. A method to compute model-based synthetic P-WFS command matrices is also shown, and its performance is compared to the conventional calibration. It was observed that in poor visibility the new calibration is better than the conventional one.
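A synthetic command matrix of the kind described here is, in the linear approximation, a regularized pseudo-inverse of the model Jacobian; a minimal sketch with hypothetical dimensions:

```python
import numpy as np

# J: model-derived Jacobian of the P-WFS around a flat wavefront, mapping
# modal commands to sensor signals (n_signals x n_modes; stand-in values).
J = np.random.default_rng(0).normal(size=(200, 50))
R = np.linalg.pinv(J, rcond=1e-3)   # synthetic command (control) matrix
# Closed loop: modal_correction = -gain * (R @ pwfs_signal)
```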
AlBarakati, SF; Kula, KS; Ghoneima, AA
2012-01-01
Objective The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6 week interval between measurements. Reliability within each method was determined using Pearson's correlation coefficient (r2). Reproducibility between methods was assessed by paired t-test. The level of statistical significance was set at p < 0.05. Results All measurements for each method had r2 above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements, except for the ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and lower anterior facial height (p = 0.6). Conclusion In general, both conventional and digital cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
Stabile, Sueli Aparecida Batista; Evangelista, Dilson Henrique Ramos; Talamonte, Valdely Helena; Lippi, Umberto Gazi; Lopes, Reginaldo Guedes Coelho
2012-01-01
To compare two oncotic cervical cytology techniques, conventional and liquid-based cytology, in patients at low risk for uterine cervical cancer. Comparative prospective study of 100 patients who came for their annual gynecological exam and underwent both techniques simultaneously. We used the McNemar test, with a significance level of p < 0.05, to compare the results regarding adequacy of smear quality, prevalence of descriptive diagnoses, guided biopsy confirmation, and histology. Adequacy of the smear was similar for both methods. The presence of the squamocolumnar junction (93% of conventional versus 84% of liquid-based smears) differed significantly. Atypical cells were detected in 3% of conventional and 10% of liquid-based smears (p = 0.06); atypical squamous cells of undetermined significance were the most prevalent abnormality. Liquid-based cytology performed better when compared with colposcopy (guided biopsy), presenting a sensitivity of 66.7% and a specificity of 100%. There was no cytological and histological concordance for conventional cytology.
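The McNemar test used here compares paired binary outcomes on the same patients; a sketch with statsmodels, using a hypothetical 2x2 table (the paper does not report the full table):

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: conventional cytology (abnormal / normal);
# columns: liquid-based cytology (abnormal / normal). Counts are invented.
table = [[2, 1],
         [8, 89]]
result = mcnemar(table, exact=True)  # exact binomial form for small counts
print(result.statistic, result.pvalue)
```

Only the discordant cells (here 1 and 8) drive the test, which is why it suits paired method-comparison designs like this one.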
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and the linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis reveals why the conventional MED and LPF methods fail to identify the existence of contamination events. It also reveals that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
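For orientation, a minimal Euclidean-distance (MED-style) detector flags an event when the multi-parameter water-quality vector drifts beyond a threshold from its baseline; the parameters, scaling, and threshold below are site-specific assumptions, not values from the study.

```python
import numpy as np

def med_alarm(sample, baseline, threshold):
    """Flag a possible contamination event when the water-quality vector
    (e.g. chlorine, pH, turbidity, conductivity) is farther than
    `threshold` from the normal-operation baseline, in Euclidean distance."""
    return np.linalg.norm(sample - baseline) > threshold

baseline = np.array([1.0, 7.2, 0.3, 450.0])  # hypothetical normal readings
sample = np.array([0.6, 7.0, 0.9, 480.0])    # hypothetical current readings
print(med_alarm(sample, baseline, threshold=40.0))
```

As the study notes, such distance-based detectors respond to sudden spike-like deviations and can miss the slower, correlated drifts seen in real incidents.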
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method, and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
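For reference, and assuming the garbled symbol above denotes the L2 gain, the quantity being optimised is the induced L2 norm of the system from input u to output y (a standard definition, not a formula quoted from this paper):

```latex
\|\Sigma\|_{\mathcal{L}_2}
  \;=\; \sup_{0 \neq u \in \mathcal{L}_2}
        \frac{\|y\|_{\mathcal{L}_2}}{\|u\|_{\mathcal{L}_2}},
\qquad
\|u\|_{\mathcal{L}_2}^{2} \;=\; \int_{0}^{\infty} \|u(t)\|^{2}\,dt .
```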
A comparison of problem-based learning and conventional teaching in nursing ethics education.
Lin, Chiou-Fen; Lu, Meei-Shiow; Chung, Chun-Chih; Yang, Che-Ming
2010-05-01
The aim of this study was to compare the learning effectiveness of peer tutored problem-based learning and conventional teaching of nursing ethics in Taiwan. The study adopted an experimental design. The peer tutored problem-based learning method was applied to an experimental group and the conventional teaching method to a control group. The study sample consisted of 142 senior nursing students who were randomly assigned to the two groups. All the students were tested for their nursing ethical discrimination ability both before and after the educational intervention. A learning satisfaction survey was also administered to both groups at the end of each course. After the intervention, both groups showed a significant increase in ethical discrimination ability. There was a statistically significant difference between the ethical discrimination scores of the two groups (P < 0.05), with the experimental group on average scoring higher than the control group. There were significant differences in satisfaction with self-motivated learning and critical thinking between the groups. Peer tutored problem-based learning and lecture-type conventional teaching were both effective for nursing ethics education, but problem-based learning was shown to be more effective. Peer tutored problem-based learning has the potential to enhance the efficacy of teaching nursing ethics in situations in which there are personnel and resource constraints.
Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho
2013-01-01
Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of continuous-flow microwave heating are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous-flow microwave heating were compared, including the stability of the liquid foods under the two heat treatments. Our goal was to determine whether continuous-flow microwave heating and conventional heating have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and phase-separation phenomena of the samples treated by the different methods. For milk we also monitored the total viable cell count, and for orange juice the vitamin C content, in addition to assessing the taste of the products by sensory analysis. The majority of the results indicate that the circulating-coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results from the analysis of the milk samples show clear differences between the heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well. PMID:23341982
Singh, Sunint; Palaskar, Jayant N.; Mittal, Sanjeev
2013-01-01
Background: Conventional heat-cure poly(methyl methacrylate) (PMMA) is the most commonly used denture base resin despite some shortcomings, lengthy polymerization time being one of them; microwave curing was recommended to overcome it, but the unavailability of specially designed microwavable acrylic resin made it unpopular. Therefore, in this study, conventional heat-cure PMMA was polymerized by microwave energy. Aim and Objectives: This study was designed to evaluate the surface porosities in PMMA cured by conventional water bath and by microwave energy, and to compare them with those of microwavable acrylic resin cured by microwave energy. Materials and Methods: Wax samples were obtained by pouring molten wax into a metal mold of 25 mm × 12 mm × 3 mm dimensions. These samples were divided into three groups, namely C, CM, and M. Group C denotes conventional heat-cure PMMA cured by the water bath method, CM denotes conventional heat-cure PMMA cured by microwave energy, and M denotes a specially designed microwavable acrylic denture base resin cured by microwave energy. After polymerization, each sample was scanned in three pre-marked areas for surface porosities using an optical microscope; as per the available literature, this instrument was used for the first time to measure porosity in acrylic resin, and it is a reliable method of measuring the area of surface pores. The scanned portion of each sample was displayed on the computer, the area of each pore was measured with software, and the data were analyzed. Results: Conventional heat-cure PMMA samples cured by microwave energy showed more porosity than the samples cured by the conventional water bath method and the microwavable acrylic resin cured by microwave energy. The higher percentage of porosity was statistically significant but well within the clinically acceptable range. Conclusion: Within the limitations of this in-vitro study, conventional heat-cure PMMA can be cured by microwave energy without compromising properties such as surface porosity. PMID:24015000
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
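The induced SENSOR model is, in outline, an ordinary logistic regression on accelerometer-derived features; a sketch with scikit-learn on synthetic stand-in data (the study's clinical data are not public):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# One row per patient; columns stand in for TUG duration and
# accelerometer-derived gait features (hypothetical values).
X = rng.normal(size=(119, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=119)) > 0.5  # fallers

model = LogisticRegression(max_iter=1000).fit(X, y)
print("in-sample AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```

A real evaluation would of course use held-out or prospective data, as the study does, rather than in-sample AUC.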
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity has a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. By contrast, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
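A minimal sketch may make the descriptor idea behind RF-Score-style functions concrete: count protein-ligand atom-type pairs within a distance cutoff and regress affinity on those counts with a random forest. The atom-type lists, the 12 Å cutoff, and the synthetic complexes below are illustrative assumptions, not the published RF-Score configuration.

```python
# Sketch of an RF-Score-style descriptor: counts of protein-ligand atom-type
# pairs within a distance cutoff, fed to a random forest regressor.
# Atom types, cutoff, and toy data are illustrative assumptions.
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

PROT_TYPES = ["C", "N", "O", "S"]            # protein elements (assumed subset)
LIG_TYPES = ["C", "N", "O", "S", "P", "F"]   # ligand elements (assumed subset)
CUTOFF = 12.0                                # distance cutoff in angstroms

def distance_count_descriptor(prot_atoms, lig_atoms):
    """prot_atoms / lig_atoms: lists of (element, xyz ndarray)."""
    pairs = {p: 0 for p in product(PROT_TYPES, LIG_TYPES)}
    for pe, pxyz in prot_atoms:
        for le, lxyz in lig_atoms:
            if (pe, le) in pairs and np.linalg.norm(pxyz - lxyz) <= CUTOFF:
                pairs[(pe, le)] += 1
    return np.array([pairs[p] for p in sorted(pairs)], dtype=float)

# Toy training set: random complexes with random affinities, for shape only.
rng = np.random.default_rng(0)
def random_complex(n_prot=50, n_lig=20):
    prot = [(rng.choice(PROT_TYPES), rng.uniform(0, 30, 3)) for _ in range(n_prot)]
    lig = [(rng.choice(LIG_TYPES), rng.uniform(0, 30, 3)) for _ in range(n_lig)]
    return prot, lig

X = np.array([distance_count_descriptor(*random_complex()) for _ in range(100)])
y = rng.uniform(2, 11, size=100)  # pKd-like labels, purely synthetic

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("train R^2:", model.score(X, y))
```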
Googling DNA sequences on the World Wide Web.
Hajibabaei, Mehrdad; Singer, Gregory A C
2009-11-10
New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it for searching species-specific genomic sequences (DNA barcodes), that uses popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm that divides a sequence library (DNA barcodes) and the query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages, with pre- and post-processing software providing customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data and provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
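A toy sketch of the word-decomposition idea, under the assumption of fixed-length, non-overlapping words (the paper's actual word scheme may differ): a conventional text indexer would then match a query to library documents by shared words, which is emulated here with Python sets.

```python
# Sketch of turning DNA barcodes into space-separated "words" so that a
# conventional text-search engine (e.g., a desktop indexer) could match a
# query against a library. Word length and the split rule are assumptions.
WORD = 10  # word length in bases (illustrative)

def to_words(seq, word=WORD):
    seq = seq.upper().replace("\n", "")
    return " ".join(seq[i:i + word] for i in range(0, len(seq) - word + 1, word))

library = {
    "barcode_A": "ACGTACGTGGTTAACCGGTTACGTACGTAA",
    "barcode_B": "TTGGCCAATTGGCCAATTGGACGTACGTGG",
}
# The library would be written out as one text document per barcode and
# indexed by the search tool; here we emulate matching by shared words.
index = {name: set(to_words(s).split()) for name, s in library.items()}

query = "ACGTACGTGGTTAACCGGTT"
qwords = set(to_words(query).split())
for name, words in index.items():
    print(name, "shared words:", len(qwords & words))
```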
NASA Astrophysics Data System (ADS)
Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.
2017-09-01
This paper presents a robust dc-link voltage and current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance dc-bus voltage control, conventional methods strive to measure and feedforward the load or source power in the dc-bus control scheme. However, the conventional feedforward-based approaches require remote measurement with communications. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of the small-signal-based control design method. To overcome these issues, in this paper, the power from DG units of the dc subgrid imposed on the BIC is considered an unmeasurable disturbance signal. In the proposed method, in contrast to existing methods, a robust controller designed from the nonlinear model of the BIC, and needing no remote measurement with communications, effectively rejects the impact of the disturbance signal imposed on the BIC's dc-link voltage. To avoid communication links, the robust controller has a plug-and-play feature that makes it possible to add a DG/load to or remove it from the dc subgrid without compromising the hybrid microgrid stability. Finally, Monte Carlo simulations are conducted in the MATLAB/SimPowerSystems software environment to confirm the effectiveness of the proposed control strategy.
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
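A minimal sketch of the kind of comparison reported here: a logistic regression with spline-expanded continuous predictors versus a tree ensemble, scored by out-of-sample AUC. Synthetic data and scikit-learn's SplineTransformer (B-splines, not the study's restricted cubic splines) stand in for the clinical cohorts.

```python
# Sketch: spline-expanded logistic regression vs. a random forest, by AUC.
# Data are synthetic; SplineTransformer approximates the flexible-main-effects
# idea, not the exact restricted cubic splines used in the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

X, y = make_classification(n_samples=4000, n_features=10, n_informative=5,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

spline_lr = make_pipeline(SplineTransformer(n_knots=5, degree=3),
                          LogisticRegression(max_iter=2000))
forest = RandomForestClassifier(n_estimators=300, random_state=0)

for name, model in [("spline logistic", spline_lr), ("random forest", forest)]:
    model.fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: out-of-sample AUC = {auc:.3f}")
```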
On standardization of low symmetry crystal fields
NASA Astrophysics Data System (ADS)
Gajek, Zbigniew
2015-07-01
Standardization methods of low symmetry - orthorhombic, monoclinic and triclinic - crystal fields are formulated and discussed. Two alternative approaches are presented: the conventional one, based on the second-rank parameters, and a standardization based on the fourth-rank parameters. Mainly f-electron systems are considered, but some guidelines for d-electron systems and for the spin Hamiltonian describing the zero-field splitting are given. The discussion focuses on premises for choosing the most suitable method, in particular on the inadequacy of the conventional one. A few examples from the literature illustrate this situation.
High-speed engine/component performance assessment using exergy and thrust-based methods
NASA Technical Reports Server (NTRS)
Riggins, D. W.
1996-01-01
This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.
Weber, Uwe; Constantinescu, Mihai A; Woermann, Ulrich; Schmitz, Felix; Schnabel, Kai
2016-01-01
Various learning methods are available for planning tuition on the introduction to surgical hand disinfection, and they should help to organise and deal with this topic. A video film is an alternative to conventional tuition because it can realistically present a practical demonstration. This study examines by way of comparison which form of communication is more effective for first-year medical students learning and applying surgical hand disinfection: video-based instruction or conventional tuition. A total of 50 first-year medical students were randomly allocated either to the "Conventional Instruction" (CI) study group or to the "Video-based Instruction" (VI) study group. The conventional instruction was carried out by an experienced nurse preceptor/nurse educator for the operating theatre, who taught the preparatory measures and the actual procedure in a two-minute lesson. The second group watched a two-minute video sequence with identical content. Afterwards, both groups demonstrated the knowledge they had acquired at an individual practical test station. The quality of (a) the preparation, (b) the procedure and (c) the results was assessed by 6 blinded experts using a checklist. The acceptability of the respective teaching method was also surveyed using a questionnaire. Group performance did not differ in either the preparation (t=-0.78, p<0.44) or the quality (t=-0.99, p<0.34). With respect to performance, a strong treatment effect was demonstrated: in the practical score (t=-3.33, p<0.002, d=0.943) and in the total score (t=-2.65, p<0.011, d=0.751), the group with video-based instruction achieved a significantly better result. Asked which of the two learning methods they would prefer, a significant majority (60.4%) of students chose video instruction. In this study, video-based instruction emerged as the more effective teaching method for medical students learning surgical hand disinfection and is preferable to conventional instruction. Video instruction is associated with higher learning effectiveness, efficiency and acceptability.
Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal
2015-09-01
Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in the cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into the occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants on the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites at which PCR did detect it. This confirms our assumption that the use of complementary methods can provide a more accurate assessment of occupational exposure to the P. glabrum complex in the cork industry.
Yamada, Toru; Umeyama, Shinji; Matsuda, Keiji
2012-01-01
In conventional functional near-infrared spectroscopy (fNIRS), systemic physiological fluctuations evoked by a body's motion and psychophysiological changes often contaminate fNIRS signals. We propose a novel method for separating functional and systemic signals based on their hemodynamic differences. Considering their physiological origins, we assumed a negative and positive linear relationship between oxy- and deoxyhemoglobin changes of functional and systemic signals, respectively. Their coefficients are determined by an empirical procedure. The proposed method was compared to conventional and multi-distance NIRS. The results were as follows: (1) Nonfunctional tasks evoked substantial oxyhemoglobin changes, and comparatively smaller deoxyhemoglobin changes, in the same direction by conventional NIRS. The systemic components estimated by the proposed method were similar to the above finding. The estimated functional components were very small. (2) During finger-tapping tasks, laterality in the functional component was more distinctive using our proposed method than that by conventional fNIRS. The systemic component indicated task-evoked changes, regardless of the finger used to perform the task. (3) For all tasks, the functional components were highly coincident with signals estimated by multi-distance NIRS. These results strongly suggest that the functional component obtained by the proposed method originates in the cerebral cortical layer. We believe that the proposed method could improve the reliability of fNIRS measurements without any modification in commercially available instruments. PMID:23185590
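The separation itself reduces to solving a 2×2 linear system per time point. Here is a minimal sketch under the stated assumptions (functional deoxy/oxy slope kf < 0, systemic slope ks > 0, coefficients fixed empirically); the coefficient values and signals below are invented for illustration.

```python
# Sketch of the hemodynamic separation idea: measured oxy-Hb (O) and deoxy-Hb
# (D) changes are modeled as sums of a functional component (deoxy = kf * oxy,
# kf < 0) and a systemic component (deoxy = ks * oxy, ks > 0). With kf and ks
# fixed, the 2x2 system is solved per time point. Values are illustrative.
import numpy as np

kf, ks = -0.6, 0.4  # assumed empirical coefficients (negative/positive slopes)

def separate(oxy, deoxy, kf=kf, ks=ks):
    """Return (functional_oxy, systemic_oxy) from measured oxy/deoxy traces."""
    systemic = (deoxy - kf * oxy) / (ks - kf)
    functional = oxy - systemic
    return functional, systemic

t = np.linspace(0, 60, 600)
true_func = 0.5 * np.exp(-((t - 30) ** 2) / 20)   # task-evoked bump
true_sys = 0.3 * np.sin(2 * np.pi * 0.1 * t)      # slow systemic oscillation
oxy = true_func + true_sys
deoxy = kf * true_func + ks * true_sys

f_est, s_est = separate(oxy, deoxy)
print("max |functional error|:", np.max(np.abs(f_est - true_func)))
```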
IT: An Effective Pedagogic Tool in the Teaching of Quantitative Methods in Management.
ERIC Educational Resources Information Center
Nadkami, Sanjay M.
1998-01-01
Examines the possibility of supplementing conventional pedagogic methods with information technology-based teaching aids in the instruction of quantitative methods to undergraduate students. Considers the case for a problem-based learning approach, and discusses the role of information technology. (Author/LRW)
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.
1976-01-01
Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.
Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki
2015-04-30
Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE methods: variational Bayes PCA (VBPCA), which was extended here to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE was also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data, and outperformed two conventional supervised FE methods on a real data set. The two methods thus appear equally suitable for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
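A minimal sketch of the CPCA-style unsupervised FE idea: embed the features (rather than the samples) into principal-component space and flag features with outlying loadings, without using class labels. The selection rule (largest |PC1| coordinates) and the planted-signal data are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of PCA-based unsupervised feature extraction: PCA applied to the
# transposed (feature x sample) matrix embeds each feature into PC space;
# features with outlying loadings are selected without class labels.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 20, 200
X = rng.normal(size=(n_samples, n_features))
X[:10, :20] += 3.0                       # 20 "critical" features differ by group

Xc = X - X.mean(axis=0)                  # center each feature over samples
U, s, Vt = np.linalg.svd(Xc.T, full_matrices=False)
scores = U[:, 0] * s[0]                  # PC1 coordinate of every feature

selected = np.argsort(np.abs(scores))[-20:]   # outlying-loading features
print("recovered planted features:", np.sum(selected < 20), "of 20")
```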
Mino, Takuya; Maekawa, Kenji; Ueda, Akihiro; Higuchi, Shizuo; Sejima, Junichi; Takeuchi, Tetsuo; Hara, Emilio Satoshi; Kimura-Ono, Aya; Sonoyama, Wataru; Kuboki, Takuo
2015-04-01
The aim of this article was to investigate the accuracy with which full-arch implant provisional restorations are reproduced in final restorations by a 3D scan/CAD/CAM technique versus the conventional method. We fabricated two final restorations for rehabilitation of maxillary and mandibular complete edentulous areas and performed a computer-based comparative analysis of the accuracy of reproduction of the provisional restoration in the final restoration between a 3D scanning and CAD/CAM (Scan/CAD/CAM) technique and the conventional silicone-mold transfer technique. Final restorations fabricated either by the conventional or the Scan/CAD/CAM method were successfully installed in the patient. The total concave/convex volume discrepancy observed with the Scan/CAD/CAM technique was 503.50 mm(3) and 338.15 mm(3) for maxillary and mandibular implant-supported prostheses (ISPs), respectively. On the other hand, the total concave/convex volume discrepancy observed with the conventional method was markedly higher (1106.84 mm(3) and 771.23 mm(3) for maxillary and mandibular ISPs, respectively). The results of the present report suggest that the Scan/CAD/CAM method enables a more precise and accurate transfer of provisional restorations to final restorations compared to the conventional method. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods face severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain pseudo phase information. We then establish a pseudophase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model, and comparison with the inversion results of the improved RFWI and conventional FWI methods, verify that the pseudophase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
Yakhelef, N; Audibert, M; Varaine, F; Chakaya, J; Sitienei, J; Huerga, H; Bonnet, M
2014-05-01
In 2007, the World Health Organization recommended introducing rapid Mycobacterium tuberculosis culture into the diagnostic algorithm of smear-negative pulmonary tuberculosis (TB). To assess the cost-effectiveness of introducing a rapid non-commercial culture method (thin-layer agar), together with Löwenstein-Jensen culture to diagnose smear-negative TB at a district hospital in Kenya. Outcomes (number of true TB cases treated) were obtained from a prospective study evaluating the effectiveness of a clinical and radiological algorithm (conventional) against the alternative algorithm (conventional plus M. tuberculosis culture) in 380 smear-negative TB suspects. The costs of implementing each algorithm were calculated using a 'micro-costing' or 'ingredient-based' method. We then compared the cost and effectiveness of conventional vs. culture-based algorithms and estimated the incremental cost-effectiveness ratio. The costs of conventional and culture-based algorithms per smear-negative TB suspect were respectively €39.5 and €144. The costs per confirmed and treated TB case were respectively €452 and €913. The culture-based algorithm led to diagnosis and treatment of 27 more cases for an additional cost of €1477 per case. Despite the increase in patients started on treatment thanks to culture, the relatively high cost of a culture-based algorithm will make it difficult for resource-limited countries to afford.
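As a quick plausibility check, the incremental cost-effectiveness ratio implied by the reported per-suspect costs follows directly; the small gap to the reported €1477 presumably reflects rounding in the published unit costs.

```python
# Worked check of the incremental cost-effectiveness ratio (ICER) implied by
# the reported figures: cost per suspect for each algorithm, 380 suspects,
# and 27 additional cases diagnosed by the culture-based algorithm.
n_suspects = 380
cost_conventional = 39.5   # EUR per smear-negative TB suspect
cost_culture = 144.0       # EUR per smear-negative TB suspect
extra_cases = 27

icer = (cost_culture - cost_conventional) * n_suspects / extra_cases
print(f"ICER ~ EUR {icer:.0f} per additional case treated")  # ~ EUR 1470
```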
Bidirectional composition on lie groups for gradient-based image alignment.
Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick
2010-09-01
In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both template and current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, i.e., the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.
Purification of photon subtraction from continuous squeezed light by filtering
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Asavanant, Warit; Furusawa, Akira
2017-11-01
Photon subtraction from squeezed states is a powerful scheme to create good approximation of so-called Schrödinger cat states. However, conventional continuous-wave-based methods actually involve some impurity in squeezing of localized wave packets, even in the ideal case of no optical losses. Here, we theoretically discuss this impurity by introducing mode match of squeezing. Furthermore, here we propose a method to remove this impurity by filtering the photon-subtraction field. Our method in principle enables creation of pure photon-subtracted squeezed states, which was not possible with conventional methods.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging applications, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse and traversing the WLS estimate for each sampling point, resulting in a notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication operation, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on the data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve its fast solution and the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys a lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.
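For orientation, here is a minimal sketch of the plain (non-fast) IAA iteration on a 1-D line-spectrum problem; the fast variant described above would replace the explicit covariance build and inversion with FFT-based products and Levinson-Durbin/Gohberg-Semencul steps. The grid size, iteration count, and two-tone test signal are arbitrary choices.

```python
# Minimal sketch of the (non-fast) iterative adaptive approach (IAA) for
# line-spectrum estimation, showing the WLS iteration the abstract builds on.
import numpy as np

def iaa(y, n_grid=128, n_iter=10):
    n = len(y)
    grid = np.arange(n_grid) / n_grid                       # normalized freqs
    A = np.exp(2j * np.pi * np.outer(np.arange(n), grid))   # steering matrix
    p = np.abs(A.conj().T @ y) ** 2 / n**2                  # periodogram init
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T                            # R = A diag(p) A^H
        RiA = np.linalg.inv(R) @ A
        num = RiA.conj().T @ y                              # a_k^H R^-1 y
        den = np.einsum("ij,ij->j", A.conj(), RiA)          # a_k^H R^-1 a_k
        s = num / den                                       # WLS amplitude
        p = np.abs(s) ** 2
    return grid, p

n = 32
t = np.arange(n)
y = np.exp(2j*np.pi*0.20*t) + 0.5*np.exp(2j*np.pi*0.23*t)   # two close tones
grid, p = iaa(y)
print("peaks near:", grid[np.argsort(p)[-2:]])
```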
Design and implement of mobile equipment management system based on QRcode
NASA Astrophysics Data System (ADS)
Yu, Runze; Duan, Xiaohui; Jiao, Bingli
2017-08-01
A mobile equipment management system based on QRcode is proposed for remote and convenient device management. Unlike conventional systems, which can only be operated with specific devices, the system described here gives managers access to real-time information via smartphones. This lightweight and efficient telemanagement mode facilitates asset management in multiple scenarios.
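As a flavor of the tagging step, a minimal sketch that produces a per-device QR label; it assumes the third-party Python "qrcode" package (pip install qrcode[pil]), and the payload URL and device fields are invented for illustration.

```python
# Sketch of generating a per-device QR tag such a system might print.
# The endpoint URL and record fields are hypothetical.
import qrcode

device = {"id": "EQ-00123", "name": "oscilloscope", "site": "lab-3"}
payload = "https://example.org/equip/{id}".format(**device)  # hypothetical endpoint

img = qrcode.make(payload)          # returns a PIL image
img.save("EQ-00123.png")            # label to stick on the device
print("wrote QR tag for", device["name"])
```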
Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.
Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B
1995-11-01
Private dental practitioners may lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films with an equal distribution of images with intraosseous jaw lesions and images without disease were viewed by a panel of observers using teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is equivalent to conventional light-box viewing.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which the regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
Fatania, Nita; Fraser, Mark; Savage, Mike; Hart, Jason; Abdolrasouli, Alireza
2015-12-01
Performance of matrix-assisted laser desorption ionisation-time of flight mass spectrometry (MALDI-TOF MS) was compared in a side-by-side analysis with the conventional phenotypic methods currently in use in our laboratory for identification of yeasts in a routine diagnostic setting. A diverse collection of 200 clinically important yeasts (19 species, five genera) was identified by both methods using standard protocols. Discordant or unreliable identifications were resolved by sequencing of the internal transcribed spacer region of the rRNA gene. MALDI-TOF and conventional methods were in agreement for 182 isolates (91%), with correct identification to species level. Eighteen discordant results (9%) were due to rarely encountered species, hence the difficulty in their identification using traditional phenotypic methods. MALDI-TOF MS enabled rapid, reliable and accurate identification of clinically important yeasts in a routine diagnostic microbiology laboratory. Isolates with rare, unusual or low-probability identifications should be confirmed using robust molecular methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
K-space data processing for magnetic resonance elastography (MRE).
Corbin, Nadège; Breton, Elodie; de Mathelin, Michel; Vappou, Jonathan
2017-04-01
Magnetic resonance elastography (MRE) requires substantial data processing based on phase image reconstruction, wave enhancement, and inverse problem solving. The objective of this study is to propose a new, fast MRE method based on MR raw data processing, particularly adapted to applications requiring fast MRE measurement or high elastogram update rate. The proposed method allows measuring tissue elasticity directly from raw data without prior phase image reconstruction and without phase unwrapping. Experimental feasibility is assessed both in a gelatin phantom and in the liver of a porcine model in vivo. Elastograms are reconstructed with the raw MRE method and compared to those obtained using conventional MRE. In a third experiment, changes in elasticity are monitored in real-time in a gelatin phantom during its solidification by using both conventional MRE and raw MRE. The raw MRE method shows promising results by providing similar elasticity values to the ones obtained with conventional MRE methods while decreasing the number of processing steps and circumventing the delicate step of phase unwrapping. Limitations of the proposed method are the influence of the magnitude on the elastogram and the requirement for a minimum number of phase offsets. This study demonstrates the feasibility of directly reconstructing elastograms from raw data.
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop, for the first time, a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs) to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, propagation of waves in various kinds of linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and with the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only computationally more accurate and efficient than the conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty
NASA Astrophysics Data System (ADS)
Brumble, K. C.
2012-12-01
What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.
Anttila, Ahti; Pokhrel, Arun; Kotaniemi-Talonen, Laura; Hakama, Matti; Malila, Nea; Nieminen, Pekka
2011-03-01
The purpose was to evaluate alternative cytological screening methods in population-based screening for cervical cancer up to cancer incidence and mortality outcome. Automation-assisted screening was compared to conventional cytological screening in a randomized design. The study was based on follow-up of 503,391 women invited in the Finnish cervical cancer screening program during 1999-2003. The endpoints were incident cervical cancer, severe intraepithelial neoplasia and deaths from cervical cancer. One third of the women had been randomly allocated to automation-assisted screening and two thirds to conventional cytology. Information on cervical cancer and severe neoplasia were obtained through 1999-2007 from a linkage between screening and cancer registry files. There were altogether 3.2 million woman-years at risk, and the average follow-up time was 6.3 years. There was no difference in the risk of cervical cancer between the automation-assisted and conventional screening methods; the relative risk (RR) of cervical cancer between the study and control arm was 1.00 (95% confidence interval [CI] = 0.76-1.29) among all invited and 1.08 (95% CI = 0.76-1.51) among women who were test negative at entry. Comparing women who were test negative with nonscreened, RR of cervical cancer incidence was 0.26, 95% CI = 0.19-0.36 and of mortality 0.24 (0.13-0.43). Both methods were valid for screening. Because cervical cancer is rare in our country, we cannot rule out small differences between methods. Evidence on alternative methods for cervical cancer screening is increasing and it is thus feasible to evaluate new methods in large-scale population-based screening programs up to cancer outcome. Copyright © 2010 UICC.
Green extraction of grape skin phenolics by using deep eutectic solvents.
Cvjetko Bubalo, Marina; Ćurko, Natka; Tomašević, Marina; Kovačević Ganić, Karin; Radojčić Redovniković, Ivana
2016-06-01
Conventional extraction techniques for plant phenolics are usually associated with high organic solvent consumption and long extraction times. In order to establish an environmentally friendly extraction method for grape skin phenolics, deep eutectic solvents (DES) as a green alternative to conventional solvents coupled with highly efficient microwave-assisted and ultrasound-assisted extraction methods (MAE and UAE, respectively) have been considered. Initially, screening of five different DES for proposed extraction was performed and choline chloride-based DES containing oxalic acid as a hydrogen bond donor with 25% of water was selected as the most promising one, resulting in more effective extraction of grape skin phenolic compounds compared to conventional solvents. Additionally, in our study, UAE proved to be the best extraction method with extraction efficiency superior to both MAE and conventional extraction method. The knowledge acquired in this study will contribute to further DES implementation in extraction of biologically active compounds from various plant sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
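To make the SRC mechanics concrete, here is a minimal sketch: a test feature vector is sparsely coded over a dictionary of labeled training columns, assigned to the class with the smallest class-wise reconstruction residual, and, in the supervised-update case, appended to the dictionary once its label is known. The synthetic two-class data, sparsity level, and update rule are illustrative assumptions, not the paper's exact schemes.

```python
# Sketch of sparse-representation-based classification (SRC) with a simple
# supervised dictionary update. Data are synthetic stand-ins for EEG features.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)
dim, per_class = 40, 15
means = [rng.normal(0, 1, dim), rng.normal(0, 1, dim)]
D = np.hstack([(m[:, None] + 0.3 * rng.normal(size=(dim, per_class)))
               for m in means])                     # dictionary, one block per class
labels = np.repeat([0, 1], per_class)
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms

def src_predict(D, labels, x, n_nonzero=5):
    coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    residuals = [np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                 for c in (0, 1)]
    return int(np.argmin(residuals))

x = means[1] + 0.3 * rng.normal(size=dim)           # new test trial, true class 1
print("predicted class:", src_predict(D, labels, x))

# Supervised update: append the labeled test trial as a new atom.
D = np.hstack([D, (x / np.linalg.norm(x))[:, None]])
labels = np.append(labels, 1)
```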
Al-mejrad, Lamya A.; Albarrag, Ahmed M.
2017-01-01
PURPOSE The goal of this study was to compare the adhesion of Candida albicans to the surfaces of CAD/CAM and conventionally fabricated complete denture bases. MATERIALS AND METHODS Twenty discs of acrylic resin poly(methyl methacrylate) were fabricated with CAD/CAM and conventional procedures (heat-polymerized acrylic resin). The specimens were divided into two groups: 10 discs were fabricated using the CAD/CAM procedure (Wieland Digital Denture Ivoclar Vivadent), and 10 discs were fabricated using a conventional flasking and pressure-pack technique. Candida colonization was performed on all the specimens using four Candida albicans isolates. The difference in Candida albicans adhesion between the discs was evaluated. The number of adherent yeast cells was calculated by colony-forming units (CFU) and by fluorescence microscopy. RESULTS There was a significant difference between the adhesion of Candida albicans to the complete denture bases created with CAD/CAM and the adhesion to those created with the conventional procedure. The CAD/CAM denture bases exhibited less adhesion of Candida albicans than did the denture bases created with the conventional procedure (P<.05). CONCLUSION The CAD/CAM procedure for fabricating complete dentures showed promising potential for reducing the adherence of Candida to the denture base surface. Clinical Implications: Complete dentures made with the CAD/CAM procedure might decrease the incidence of denture stomatitis compared with conventional dentures. PMID:29142649
2010-01-01
Background Problem-based Learning (PBL) has been suggested as a key educational method of knowledge acquisition to improve medical education. We sought to evaluate the differences in medical school education between graduates from PBL-based and conventional curricula and to what extent these curricula fit job requirements. Methods Graduates from all German medical schools who graduated between 1996 and 2002 were eligible for this study. Graduates self-assessed nine competencies as required at their day-to-day work and as taught in medical school on a 6-point Likert scale. Results were compared between graduates from a PBL-based curriculum (University Witten/Herdecke) and conventional curricula. Results Three schools were excluded because of low response rates. Baseline demographics between graduates of the PBL-based curriculum (n = 101, 49% female) and the conventional curricula (n = 4720, 49% female) were similar. No major differences were observed regarding job requirements with priorities for "Independent learning/working" and "Practical medical skills". All competencies were rated to be better taught in PBL-based curriculum compared to the conventional curricula (all p < 0.001), except for "Medical knowledge" and "Research competence". Comparing competencies required at work and taught in medical school, PBL was associated with benefits in "Interdisciplinary thinking" (Δ + 0.88), "Independent learning/working" (Δ + 0.57), "Psycho-social competence" (Δ + 0.56), "Teamwork" (Δ + 0.39) and "Problem-solving skills" (Δ + 0.36), whereas "Research competence" (Δ - 1.23) and "Business competence" (Δ - 1.44) in the PBL-based curriculum needed improvement. Conclusion Among medical graduates in Germany, PBL demonstrated benefits with regard to competencies which were highly required in the job of physicians. Research and business competence deserve closer attention in future curricular development. PMID:20074350
Improving Students' Diagram Comprehension with Classroom Instruction
ERIC Educational Resources Information Center
Cromley, Jennifer G.; Perez, Tony C.; Fitzhugh, Shannon L.; Newcombe, Nora S.; Wills, Theodore W.; Tanaka, Jacqueline C.
2013-01-01
The authors tested whether students can be taught to better understand conventional representations in diagrams, photographs, and other visual representations in science textbooks. The authors developed a teacher-delivered, workbook-and-discussion-based classroom instructional method called Conventions of Diagrams (COD). The authors trained 1…
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
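A compact sketch of the base/detail pipeline may help: Gaussian smoothing splits each input into base and detail layers, a saliency map weights the base-layer combination, and a simple max-absolute rule stands in here for the paper's WLS detail optimization. The saliency proxy (smoothed Laplacian magnitude) and all parameters are illustrative assumptions.

```python
# Minimal sketch of saliency-weighted base/detail image fusion. The Gaussian
# split approximates the paper's RGF/Gaussian multi-scale decomposition, and
# the max-abs detail rule replaces the WLS optimization for brevity.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def decompose(img, sigma=4.0):
    base = gaussian_filter(img, sigma)
    return base, img - base

def saliency(img, sigma=2.0):
    # local contrast magnitude as a crude visual-saliency proxy
    return gaussian_filter(np.abs(gaussian_laplace(img, 1.0)), sigma)

def fuse(ir, vis):
    ir_b, ir_d = decompose(ir)
    vis_b, vis_d = decompose(vis)
    s_ir, s_vis = saliency(ir), saliency(vis)
    w = s_ir / (s_ir + s_vis + 1e-12)          # per-pixel base-layer weight
    base = w * ir_b + (1 - w) * vis_b
    detail = np.where(np.abs(ir_d) > np.abs(vis_d), ir_d, vis_d)
    return base + detail

rng = np.random.default_rng(0)
ir, vis = rng.random((64, 64)), rng.random((64, 64))
print("fused shape:", fuse(ir, vis).shape)
```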
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on a distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
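A minimal sketch of the L1 objective, maximized here by generic (sub)gradient ascent on the dispersion ratio rather than the paper's exact algorithm; the two-class toy data with planted outliers are an illustrative assumption.

```python
# Sketch of an L1-norm LDA direction: ascend the ratio of L1 between-class
# to L1 within-class dispersion with a subgradient update.
import numpy as np

rng = np.random.default_rng(0)
# two noisy 2-D classes, with a couple of outliers in class 0
X0 = rng.normal([0, 0], 0.5, (50, 2)); X0[:2] += [8, -8]   # outliers
X1 = rng.normal([2, 2], 0.5, (50, 2))
mu = np.vstack([X0, X1]).mean(0)
B = np.array([X0.mean(0) - mu, X1.mean(0) - mu])           # between-class vectors
E = np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)])          # within-class deviations

w = rng.normal(size=2); w /= np.linalg.norm(w)
for _ in range(200):
    num, den = np.abs(B @ w).sum(), np.abs(E @ w).sum()
    g_num = B.T @ np.sign(B @ w)                           # subgradient of numerator
    g_den = E.T @ np.sign(E @ w)                           # subgradient of denominator
    w += 0.05 * (g_num / den - num * g_den / den**2)       # ascend the ratio
    w /= np.linalg.norm(w)
print("projection direction:", w)
```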
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
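A toy sketch of the index-differencing and K-means initialization steps (the subsequent C-V curve evolution is omitted); the band values, burn patch, and two-cluster rule are synthetic stand-ins for the Landsat processing described above.

```python
# Sketch: NBR from NIR/SWIR bands, a pre/post difference image (dNBR), and a
# two-class K-means on the difference to form the binary mask that would
# initialize the level-set curve. Bands are synthetic.
import numpy as np
from sklearn.cluster import KMeans

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-12)

rng = np.random.default_rng(0)
shape = (60, 60)
nir_pre = 0.5 + 0.05 * rng.random(shape); swir_pre = 0.2 + 0.05 * rng.random(shape)
nir_post, swir_post = nir_pre.copy(), swir_pre.copy()
nir_post[20:40, 20:40] -= 0.3; swir_post[20:40, 20:40] += 0.2   # synthetic burn

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)        # high where burned
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dnbr.reshape(-1, 1))
burned_label = km.cluster_centers_.argmax()
init_mask = (km.labels_ == burned_label).reshape(shape)          # level-set init
print("initial burned pixels:", init_mask.sum())
```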
NASA Astrophysics Data System (ADS)
He, Yu; Shen, Yuecheng; Feng, Xiaohua; Liu, Changjun; Wang, Lihong V.
2017-08-01
A circularly polarized antenna, providing more homogeneous illumination than a linearly polarized antenna, is more suitable for microwave-induced thermoacoustic tomography (TAT). The conventional realization of circular polarization uses a helical antenna, but this suffers from low efficiency, low power capacity, and limited aperture in TAT systems. Here, we report an implementation of a circularly polarized illumination method in TAT that inserts a single-layer linear-to-circular polarizer based on frequency selective surfaces between a pyramidal horn antenna and the imaging object. The performance of the proposed method was validated by both simulations and experimental imaging of a breast tumor phantom. The results showed that circular polarization was achieved, and the resultant thermoacoustic signal-to-noise ratio was twice that obtained with the helical antenna. The proposed method is more desirable in a waveguide-based TAT system than the conventional method.
2012-01-01
Background Ovine footrot is a contagious disease with worldwide occurrence in sheep. The main causative agent is the fastidious bacterium Dichelobacter nodosus. In Scandinavia, footrot was first diagnosed in Sweden in 2004 and later also in Norway and Denmark. Clinical examination of sheep feet is fundamental to diagnosis of footrot, but D. nodosus should also be detected to confirm the diagnosis. PCR-based detection using conventional PCR has been used at our institutes, but the method was laborious and there was a need for a faster, easier-to-interpret method. The aim of this study was to develop a TaqMan-based real-time PCR assay for detection of D. nodosus and to compare its performance with culturing and conventional PCR. Methods A D. nodosus-specific TaqMan based real-time PCR assay targeting the 16S rRNA gene was designed. The inclusivity and exclusivity (specificity) of the assay was tested using 55 bacterial and two fungal strains. To evaluate the sensitivity and harmonisation of results between different laboratories, aliquots of a single DNA preparation were analysed at three Scandinavian laboratories. The developed real-time PCR assay was compared to culturing by analysing 126 samples, and to a conventional PCR method by analysing 224 samples. A selection of PCR-products was cloned and sequenced in order to verify that they had been identified correctly. Results The developed assay had a detection limit of 3.9 fg of D. nodosus genomic DNA. This result was obtained at all three laboratories and corresponds to approximately three copies of the D. nodosus genome per reaction. The assay showed 100% inclusivity and 100% exclusivity for the strains tested. The real-time PCR assay found 54.8% more positive samples than by culturing and 8% more than conventional PCR. Conclusions The developed real-time PCR assay has good specificity and sensitivity for detection of D. nodosus, and the results are easy to interpret. The method is less time-consuming than either culturing or conventional PCR. PMID:22293440
USDA-ARS?s Scientific Manuscript database
Molecular detection of bacterial pathogens based on LAMP methods is a faster and simpler approach than conventional culture methods. Although different LAMP-based methods for pathogenic bacterial detection are available, a systematic comparison of these different LAMP assays has not been performed. ...
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
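A toy sketch of the subspace idea: the left singular vectors of a simulated signal dictionary span a low-dimensional temporal subspace, a per-voxel time series is fit in that subspace by least squares, and the denoised series is matched back to the dictionary. The exponential dictionary is an illustrative stand-in for Bloch-simulated fingerprints, and no undersampling is modeled.

```python
# Sketch of subspace-constrained MRF-style fitting: build a temporal basis
# from the SVD of a toy dictionary, fit a noisy voxel series in that basis
# by least squares, then do dictionary-based pattern matching.
import numpy as np

rng = np.random.default_rng(0)
n_t, K = 200, 5
t = np.linspace(0.01, 3.0, n_t)
T1s = np.linspace(0.3, 2.5, 100)                         # candidate T1 values (s)
D = np.exp(-t[:, None] / T1s[None, :])                   # toy "fingerprint" dictionary
Phi = np.linalg.svd(D, full_matrices=False)[0][:, :K]    # temporal subspace basis

y = np.exp(-t / 1.2) + 0.02 * rng.normal(size=n_t)       # noisy voxel time series
c = Phi.T @ y                                            # least-squares coefficients
y_lr = Phi @ c                                           # subspace-denoised series

match = np.argmax((D / np.linalg.norm(D, axis=0)).T @ y_lr)
print("estimated T1 ~", T1s[match], "s")
```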
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian
2006-01-01
Objective Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131
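To make the comparison concrete, here is a small simulated sketch of the two approaches being contrasted, propensity-score stratification versus a conventional outcome model, on hypothetical data (all variable names, coefficients, and the quintile scheme are illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Simulated data: X confounders, t treatment, y outcome; true effect = 1
    rng = np.random.default_rng(1)
    n = 5000
    X = rng.standard_normal((n, 3))
    t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))
    y = 1.0 * t + X @ [1.0, 1.0, -0.5] + rng.standard_normal(n)

    # Propensity score: estimated probability of exposure given covariates
    ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

    # PS approach: stratify on quintiles, average within-stratum differences
    edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
    strata = np.digitize(ps, edges)
    effects, weights = [], []
    for s in range(5):
        m = strata == s
        if m.sum() and t[m].var() > 0:          # both groups present in stratum
            effects.append(y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
            weights.append(m.sum())
    print("PS-stratified effect:", np.average(effects, weights=weights))

    # Conventional outcome model: regress y on treatment and covariates jointly
    beta = np.linalg.lstsq(np.column_stack([np.ones(n), t, X]), y, rcond=None)[0]
    print("Outcome-model effect:", beta[1])

With confounding well captured by the measured covariates, as here, the two estimates agree closely, mirroring the empirical finding of the review.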
"Drug" Discovery with the Help of Organic Chemistry.
Itoh, Yukihiro; Suzuki, Takayoshi
2017-01-01
The first step in "drug" discovery is to find compounds binding to a potential drug target. In modern medicinal chemistry, the screening of a chemical library, structure-based drug design, and ligand-based drug design, or a combination of these methods, are generally used for identifying the desired compounds. However, they do not necessarily lead to success and there is no infallible method for drug discovery. Therefore, it is important to explore medicinal chemistry based on not only the conventional methods but also new ideas. So far, we have found various compounds as drug candidates. In these studies, some strategies based on organic chemistry have allowed us to find drug candidates, through 1) construction of a focused library using organic reactions and 2) rational design of enzyme inhibitors based on chemical reactions catalyzed by the target enzyme. Medicinal chemistry based on organic chemical reactions could be expected to supplement the conventional methods. In this review, we present drug discovery with the help of organic chemistry showing examples of our explorative studies on histone deacetylase inhibitors and lysine-specific demethylase 1 inhibitors.
Mitsuhashi, Shota; Akamatsu, Yasushi; Kobayashi, Hideo; Kusayama, Yoshihiro; Kumagai, Ken; Saito, Tomoyuki
2018-02-01
Rotational malpositioning of the tibial component can lead to poor functional outcome in TKA. Although various surgical techniques have been proposed, precise rotational placement of the tibial component was difficult to accomplish even with the use of a navigation system. The purpose of this study is to assess whether combined CT-based and image-free navigation systems replicate accurately the rotational alignment of tibial component that was preoperatively planned on CT, compared with the conventional method. We compared the number of outliers for rotational alignment of the tibial component using combined CT-based and image-free navigation systems (navigated group) with those of conventional method (conventional group). Seventy-two TKAs were performed between May 2012 and December 2014. In the navigated group, the anteroposterior axis was prepared using CT-based navigation system and the tibial component was positioned under control of the navigation. In the conventional group, the tibial component was placed with reference to the Akagi line that was determined visually. Fisher's exact probability test was performed to evaluate the results. There was a significant difference between the two groups with regard to the number of outliers: 3 outliers in the navigated group compared with 12 outliers in the conventional group (P < 0.01). We concluded that combined CT-based and image-free navigation systems decreased the number of rotational outliers of tibial component, and was helpful for the replication of the accurate rotational alignment of the tibial component that was preoperatively planned.
Fluence-based and microdosimetric event-based methods for radiation protection in space
NASA Technical Reports Server (NTRS)
Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)
2002-01-01
The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report #137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, at present, there is no compelling reason to switch to a new methodology, but that because of certain drawbacks in the conventional method, these alternative methodologies should be kept in mind. As new data become available and dosimetric techniques become more refined, the question should be revisited, and significant improvement might be realized in the future. In addition, such concepts as equivalent dose and organ dose equivalent are discussed, and various problems regarding the measurement and estimation of these quantities are presented.
NASA Astrophysics Data System (ADS)
Deka, Jashmini; Mojumdar, Aditya; Parisse, Pietro; Onesti, Silvia; Casalis, Loredana
2017-03-01
Helicases are essential enzymes that are widespread in all life-forms. Owing to their central role in nucleic acid metabolism, they are emerging as important targets for antiviral, antibacterial and anticancer drugs. The development of easy, cheap, fast and robust biochemical assays to measure helicase activity, overcoming the limitations of current methods, is a prerequisite for the discovery of helicase inhibitors through high-throughput screening. We have developed a method that exploits the optical properties of DNA-conjugated gold nanoparticles (AuNPs) and meets the required criteria. The method was tested with the catalytic domain of the human RecQ4 helicase and compared with a conventional FRET-based assay. The AuNP-based assay produced similar results but is simpler, more robust and cheaper than FRET. Therefore, our nanotechnology-based platform has the potential to provide a useful alternative to existing conventional methods for following helicase activity and for screening small-molecule libraries for potential helicase inhibitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejabhiram, Y., E-mail: tejabhiram@gmail.com; Pradeep, R.; Helen, A.T.
2014-12-15
Highlights: • Novel low temperature synthesis of nickel ferrite nanoparticles. • Comparison with two conventional synthesis techniques including hydrothermal method. • XRD results confirm the formation of crystalline nickel ferrites at 110 °C. • Superparamagnetic particles with applications in drug delivery and hyperthermia. • Magnetic properties superior to conventional methods found in new process. - Abstract: We report a simple, low temperature and surfactant-free co-precipitation method for the preparation of nickel ferrite nanostructures using ferrous sulfate as the iron precursor. The products obtained from this method were compared for their physical properties with nickel ferrites produced through conventional co-precipitation and hydrothermal methods, which used ferric nitrate as the iron precursor. X-ray diffraction analysis confirmed the synthesis of single-phase inverse spinel nanocrystalline nickel ferrites at temperatures as low as 110 °C in the low temperature method. Electron microscopy analysis of the samples revealed the formation of nearly spherical nanostructures in the size range of 20–30 nm, comparable to the other conventional methods. Vibrating sample magnetometer measurements showed the formation of superparamagnetic particles with a high magnetic saturation of 41.3 emu/g, which corresponds well with conventional synthesis methods. The spontaneous synthesis of the nickel ferrite nanoparticles by the low temperature method was attributed to the presence of 0.808 kJ mol⁻¹ of excess Gibbs free energy due to the ferrous sulfate precursor.
CALL, Prewriting Strategies, and EFL Writing Quantity
ERIC Educational Resources Information Center
Shafiee, Sajad; Koosha, Mansour; Afghar, Akbar
2015-01-01
This study sought to explore the effect of teaching prewriting strategies through different methods of input delivery (i.e. conventional, web-based, and hybrid) on EFL learners' writing quantity. In its quasi-experimental study, the researchers recruited 98 available sophomores, and assigned them to three experimental groups (conventional,…
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, appropriate type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method overestimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
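A minimal sketch of the cluster-bootstrap variant, assuming a pandas DataFrame df with columns T (event/censoring time), E (event indicator), cluster (cluster ID), and covariate columns; lifelines' CoxPHFitter stands in for the Cox fit (an illustration of the resampling scheme, not the authors' implementation):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def cluster_bootstrap_se(df, n_boot=200, seed=0):
        # Resample whole clusters with replacement, refit Cox's model each
        # time, and take the empirical SD of the coefficients as the SE.
        rng = np.random.default_rng(seed)
        ids = df["cluster"].unique()
        coefs = []
        for _ in range(n_boot):
            picked = rng.choice(ids, size=len(ids), replace=True)
            boot = pd.concat([df[df["cluster"] == c] for c in picked],
                             ignore_index=True)
            cph = CoxPHFitter().fit(boot.drop(columns="cluster"),
                                    duration_col="T", event_col="E")
            coefs.append(cph.params_.to_numpy())
        return np.std(coefs, axis=0, ddof=1)

The two-step variant would add an inner resampling, with replacement, of the rows within each picked cluster before concatenating.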
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. Until now, most DOI methods developed have not been cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for low-cost conventional block detectors with four-PMT readout. Using this method, the DOI information can be directly extracted from the DOI-related crystal spot deformation in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. Therefore, we conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value in medical image analysis and diagnosis. In this paper, the conventional wavelet-based fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We applied both the conventional and the improved wavelet-transform-based fusion algorithms to fuse two images of the human body and evaluated the fusion results with a quality evaluation method. Experimental results show that this algorithm effectively retains the details of the original images and enhances their edge and texture features. The new algorithm outperforms the conventional fusion algorithm based on the wavelet transform.
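A compact sketch of wavelet-domain fusion using PyWavelets; for brevity the detail bands are fused by per-coefficient maximum magnitude and the approximation band by averaging, which are simplified stand-ins for the paper's regional-edge-intensity and edge-based rules:

    import numpy as np
    import pywt

    def fuse_wavelet(img_a, img_b, wavelet="db2", level=2):
        # Decompose both registered grayscale images
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                 # low-frequency band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            # keep the detail coefficient with the larger magnitude
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in zip((ha, va, da), (hb, vb, db))))
        return pywt.waverec2(fused, wavelet)

    a = np.zeros((64, 64)); a[16:48, 16:48] = 1.0       # toy "images"
    b = np.eye(64)
    fused_img = fuse_wavelet(a, b)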
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang
2015-03-01
A splicing parameter optimization method to increase the tensile strength of the splicing joint between a photonic crystal fiber (PCF) and a conventional fiber is demonstrated. Based on the splicing recipes provided by splicer or fiber manufacturers, the optimal values of several major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splicing joints between PCFs and conventional fibers is validated through experiments.
NASA Astrophysics Data System (ADS)
Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru
2017-10-01
Geometrical-optics ray tracing is a reasonable numerical approach for describing electron cyclotron resonance waves (ECWs) in slowly varying, spatially inhomogeneous plasma, and the results of this conventional method are adequate in most cases. However, in helical fusion plasmas with complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations for ECWs contain no terms describing diffraction, polarization, or wave decay, they cannot accurately describe mode coupling, strongly focused waves, or the behavior of waves in an inhomogeneous absorption region. To address these problems at a fundamental level, we consider an extension of the ray-tracing method. The procedure is as follows: first, calculate the reference ray by the conventional method and define a local ray-based coordinate system along the reference ray; then, calculate the evolution of the amplitude and phase distributions on the ray-based coordinates step by step. The progress of our extended method will be presented.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
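The data-parallel pattern such methods exploit can be illustrated by advancing many independent Metropolis chains in lock-step with array operations (NumPy as a CPU stand-in; a GPU array library could run the same pattern; target and tuning constants are illustrative):

    import numpy as np

    # Many chains advanced together: every array op acts on all chains at once,
    # which is exactly the kind of workload GPUs accelerate.
    rng = np.random.default_rng(0)
    n_chains, n_steps = 10_000, 500
    log_target = lambda x: -0.5 * x**2          # standard normal, for illustration

    x = rng.standard_normal(n_chains)
    lp = log_target(x)
    for _ in range(n_steps):
        prop = x + 0.5 * rng.standard_normal(n_chains)   # symmetric proposal
        lp_prop = log_target(prop)
        accept = np.log(rng.random(n_chains)) < lp_prop - lp
        x = np.where(accept, prop, x)
        lp = np.where(accept, lp_prop, lp)
    print("posterior mean ≈", x.mean(), " var ≈", x.var())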
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The peak-to-valley (PV) measurement error for a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.
Pickup, William; Bremer, Phil; Peng, Mei
2018-03-01
The extensive time and cost associated with conventional sensory profiling methods have spurred sensory researchers to develop rapid alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured as comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physicochemical measurements as an additional evaluative criterion. The results revealed that sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, both were correlated with the product map generated from the instrumental measures (P < 0.05). The findings also showed that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between these two methods in configuring samples. Overall, these findings lend support to the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be a better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.
Dental enamel defect diagnosis through different technology-based devices.
Kobayashi, Tatiana Yuriko; Vitor, Luciana Lourenço Ribeiro; Carrara, Cleide Felício Carvalho; Silva, Thiago Cruvinel; Rios, Daniela; Machado, Maria Aparecida Andrade Moreira; Oliveira, Thais Marchini
2018-06-01
Dental enamel defects (DEDs) are faulty or deficient enamel formations of primary and permanent teeth. Changes during tooth development result in hypoplasia (a quantitative defect) and/or hypomineralisation (a qualitative defect). To compare technology-based diagnostic methods for detecting DEDs. Two hundred and nine dental surfaces of anterior permanent teeth were selected from patients, 6-11 years of age, with cleft lip with/without cleft palate. First, a conventional clinical examination was conducted according to the modified Developmental Defects of Enamel Index (DDE Index). Dental surfaces were then evaluated using an operating microscope and a fluorescence-based device. Interexaminer reproducibility was determined using the kappa test. To compare groups, McNemar's test was used. Cramer's V test was used for comparing the distribution of index codes obtained after classification of all dental surfaces. Cramer's V test revealed statistically significant differences (P < .0001) in the distribution of index codes obtained using the different methods; the coefficients were 0.365 for conventional clinical examination versus fluorescence, 0.961 for conventional clinical examination versus operating microscope and 0.358 for operating microscope versus fluorescence. The sensitivity of the operating microscope and fluorescence methods was statistically significant (P = .008 and P < .0001, respectively). Otherwise, the results did not show statistically significant differences in accuracy and specificity for either the operating microscope or the fluorescence method. This study suggests that the operating microscope performed better than the fluorescence-based device and could be an auxiliary method for the detection of DEDs. © 2017 FDI World Dental Federation.
Significance of parametric spectral ratio methods in detection and recognition of whispered speech
NASA Astrophysics Data System (ADS)
Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.
2012-12-01
In this article, the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. The proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and minimum variance distortionless response (MVDR) spectra. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs, as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpora. The proposed method also performs better than conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, MLLR-based adaptation methods are used herein. The hidden Markov models corresponding to neutral-mode speech are adapted to the whispered-mode speech data in the whispered regions detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. A second set of experiments is conducted on the cell phone corpus of whispered speech, which was collected using a setup that is used commercially for handling public transactions. The proposed whispered speech recognition system exhibits better performance than several conventional methods. The results indicate the feasibility of a whispered speech recognition system for cell phone-based transactions.
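A sketch of computing the LP and MVDR spectra and their ratio for a single frame follows; the model order, frequency grid, and smoothing window are illustrative choices, not the paper's settings:

    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    def lp_mvdr_ratio(frame, order=12, n_freq=256):
        # autocorrelation sequence of the frame, lags 0..order
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
        # LP coefficients from the Yule-Walker equations; sigma2 = residual power
        a = solve_toeplitz(r[:order], r[1:order + 1])
        sigma2 = r[0] - a @ r[1:order + 1]
        w = np.pi * np.arange(n_freq) / n_freq
        E = np.exp(-1j * np.outer(w, np.arange(1, order + 1)))
        lp_spec = sigma2 / np.abs(1.0 - E @ a) ** 2          # sigma2 / |A(w)|^2
        # MVDR spectrum: 1 / (e(w)^H R^{-1} e(w)), R = Toeplitz autocorrelation
        Rinv = np.linalg.inv(toeplitz(r))
        ev = np.exp(-1j * np.outer(w, np.arange(order + 1)))
        mvdr_spec = 1.0 / np.real(np.einsum("fi,ij,fj->f", ev.conj(), Rinv, ev))
        return lp_spec / mvdr_spec

    ratio = lp_mvdr_ratio(np.random.default_rng(0).standard_normal(400))
    smoothed = np.convolve(ratio, np.ones(8) / 8, mode="same")   # smoothed ratio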
Using Formal Methods to Assist in the Requirements Analysis of the Space Shuttle GPS Change Request
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Roberts, Larry W.
1996-01-01
We describe a recent NASA-sponsored pilot project intended to gauge the effectiveness of using formal methods in Space Shuttle software requirements analysis. Several Change Requests (CRs) were selected as promising targets to demonstrate the utility of formal methods in this application domain. A CR to add new navigation capabilities to the Shuttle, based on Global Positioning System (GPS) technology, is the focus of this report. Carried out in parallel with the Shuttle program's conventional requirements analysis process was a limited form of analysis based on formalized requirements. Portions of the GPS CR were modeled using the language of SRI's Prototype Verification System (PVS). During the formal methods-based analysis, numerous requirements issues were discovered and submitted as official issues through the normal requirements inspection process. Shuttle analysts felt that many of these issues were uncovered earlier than would have occurred with conventional methods. We present a summary of these encouraging results and the conclusions we have drawn from the pilot project.
Neck pain assessment in a virtual environment.
Sarig-Bahat, Hilla; Weiss, Patrice L Tamar; Laufer, Yocheved
2010-02-15
Neck-pain and control group comparative analysis of conventional and virtual reality (VR)-based assessment of cervical range of motion (CROM). To use a tracker-based VR system to compare CROM of individuals suffering from chronic neck pain with CROM of asymptomatic individuals; to compare VR system results with those obtained during conventional assessment; to present the diagnostic value of CROM measures obtained by both assessments; and to demonstrate the effect of a single VR session on CROM. Neck pain is a common musculoskeletal complaint with a reported annual prevalence of 30% to 50%. In the absence of a gold standard for CROM assessment, a variety of assessment devices and methodologies exist. Common to these methodologies, assessment of CROM is carried out by instructing subjects to move their head as far as possible. However, these elicited movements do not necessarily replicate functional movements which occur spontaneously in response to multiple stimuli. To achieve a more functional approach to cervical motion assessment, we have recently developed a VR environment in which electromagnetic tracking is used to monitor cervical motion while participants are involved in a simple yet engaging gaming scenario. CROM measures were collected from 25 symptomatic and 42 asymptomatic individuals using VR and conventional assessments. Analysis of variance was used to determine differences between groups and assessment methods. Logistic regression analysis, using a single predictor, compared the diagnostic ability of both methods. Results obtained by both methods demonstrated significant CROM limitations in the symptomatic group. The VR measures showed greater CROM and sensitivity while conventional measures showed greater specificity. A single session exposure to VR resulted in a significant increase in CROM. Neck pain is significantly associated with reduced CROM as demonstrated by both VR and conventional assessment methods. The VR method provides assessment of functional CROM and can be used for CROM enhancement. Assessment by VR has greater sensitivity than conventional assessment and can be used for the detection of true symptomatic individuals.
Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability
NASA Astrophysics Data System (ADS)
Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop
2018-02-01
We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
MURAHASHI, Shun-Ichi
2011-01-01
This review focuses on the development of ruthenium and flavin catalysts for environmentally benign oxidation reactions based on mimicking the functions of cytochrome P-450 and flavoenzymes, and low valent transition-metal catalysts that replace conventional acids and bases. Several new concepts and new types of catalytic reactions based on these concepts are described. PMID:21558760
A New Method for Extubation: Comparison between Conventional and New Methods.
Yousefshahi, Fardin; Barkhordari, Khosro; Movafegh, Ali; Tavakoli, Vida; Paknejad, Omalbanin; Bina, Payvand; Yousefshahi, Hadi; Sheikh Fathollahi, Mahmood
2012-08-01
Extubation is associated with the risk of complications such as accumulation of secretions above the endotracheal tube cuff, eventual atelectasis following a reduction in pulmonary volumes because of the lack of physiological positive end-expiratory pressure, and intra-tracheal suction. In order to reduce these complications, and based on basic physiological principles, a new practical extubation method is presented in this article. The study was designed as a six-month prospective cross-sectional clinical trial. Two hundred fifty-seven patients undergoing coronary artery bypass grafting (CABG) were divided into two groups based on their scheduled surgery time. The first group underwent the conventional extubation method, while the other group was extubated according to the newly described method. Arterial blood gas (ABG) analysis results before and after extubation were compared between the two groups to determine the effect of the extubation method on the ABG parameters and the oxygenation profile. At all time intervals, the ratio of the partial pressure of oxygen in arterial blood to the fraction of inspired oxygen (PaO2/FiO2) was higher in the new-method group than in the conventional-method group, and some differences, such as PaO2/FiO2 four hours after extubation, were statistically significant (p value = 0.0063). The new extubation method improved some respiratory parameters, thus attenuating oxygenation complications and improving oxygenation after extubation.
Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant
ERIC Educational Resources Information Center
Beach, Darrell H.
1969-01-01
Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)
Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted
2016-09-16
distribution. Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be … 4. Model Based Performance Analysis: A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and …
A Stirling engine analysis method based upon moving gas nodes
NASA Technical Reports Server (NTRS)
Martini, W. R.
1986-01-01
A Lagrangian nodal analysis method for Stirling engines (SEs) is described, validated, and applied to a conventional SE and an isothermalized SE (with fins in the hot and cold spaces). The analysis employs a constant-mass gas node (which moves with respect to the solid nodes during each time step) instead of the fixed gas nodes of Eulerian analysis. The isothermalized SE is found to have efficiency only slightly greater than that of a conventional SE.
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared with the ability of a conventional clinical decision tool, osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR) based on various predictors associated with low bone density. The learning models were compared with OST. SVM had significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to perform comparisons of the performance of osteoporosis prediction between the machine learning and conventional methods using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
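The comparison pattern, fitting an SVM and a conventional logistic model on the same predictors and comparing ROC AUCs, looks roughly like this in scikit-learn (synthetic data standing in for the KNHANES records; all sizes and settings are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for the clinical predictors (age, BMI, etc.)
    X, y = make_classification(n_samples=2000, n_features=8,
                               weights=[0.8], random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = SVC(probability=True, random_state=0).fit(Xtr, ytr)
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

    print("SVM AUC:", roc_auc_score(yte, svm.predict_proba(Xte)[:, 1]))
    print("LR  AUC:", roc_auc_score(yte, lr.predict_proba(Xte)[:, 1]))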
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively and provides an active basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
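A small sketch of the described sparsity estimator; the energy threshold and toy image are illustrative:

    import numpy as np
    from scipy.fft import dctn

    def estimate_sparsity(img, energy_threshold=0.99):
        # Fraction of 2D-DCT coefficients needed to capture the given share
        # of the total energy (dominant-coefficient proportion).
        c = dctn(img, norm="ortho")
        e = np.sort((c ** 2).ravel())[::-1]      # coefficient energies, descending
        e /= e.sum()                             # energy normalization
        k = np.searchsorted(np.cumsum(e), energy_threshold) + 1
        return k / e.size

    # e.g. a smooth gradient image is highly compressible:
    img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    print(f"estimated sparsity ≈ {estimate_sparsity(img):.4f}")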
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of a video frame effectively and provides an active basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
The German Aortic Valve Registry (GARY): in-hospital outcome
Hamm, Christian W.; Möllmann, Helge; Holzhey, David; Beckmann, Andreas; Veit, Christof; Figulla, Hans-Reiner; Cremer, J.; Kuck, Karl-Heinz; Lange, Rüdiger; Zahn, Ralf; Sack, Stefan; Schuler, Gerhard; Walther, Thomas; Beyersdorf, Friedhelm; Böhm, Michael; Heusch, Gerd; Funkat, Anne-Kathrin; Meinertz, Thomas; Neumann, Till; Papoutsis, Konstantinos; Schneider, Steffen; Welz, Armin; Mohr, Friedrich W.
2014-01-01
Background Aortic stenosis is a frequent valvular disease, especially in elderly patients. Catheter-based valve implantation has emerged as a valuable treatment approach for those patients who are either at very high risk for conventional surgery or deemed inoperable. The German Aortic Valve Registry (GARY) provides data on conventional and catheter-based aortic procedures on an all-comers basis. Methods and results A total of 13 860 consecutive patients undergoing repair for aortic valve disease [conventional surgery and transvascular (TV) or transapical (TA) catheter-based techniques] were enrolled in this registry during 2011, and baseline, procedural, and outcome data were acquired. The registry summarizes the results of 6523 conventional aortic valve replacements without (AVR) and 3464 with concomitant coronary bypass surgery (AVR + CABG), as well as 2695 TV AVI and 1181 TA interventions (TA AVI). Patients undergoing catheter-based techniques were significantly older and had higher risk profiles. The stroke rate was low in all groups: 1.3% (AVR), 1.9% (AVR + CABG), 1.7% (TV AVI), and 2.3% (TA AVI). The in-hospital mortality was 2.1% (AVR) and 4.5% (AVR + CABG) for patients undergoing conventional surgery, and 5.1% (TV AVI) and 7.7% (TA AVI) for catheter-based procedures. Conclusion The in-hospital outcome results of this registry show that conventional surgery yields excellent results in all risk groups and that catheter-based aortic valve replacement is an alternative to conventional surgery in high-risk and elderly patients. PMID:24022003
Acoustic analysis of the propfan
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1979-01-01
A review of propeller noise prediction technology is presented. Two methods for the prediction of the noise from conventional and advanced propellers in forward flight are described. These methods are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of the programs (the acoustic pressure signature) was Fourier analyzed to get the acoustic pressure spectrum. The main difference between the two programs is that one can handle propellers with supersonic tip speed while the other is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
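The Fourier-analysis step, recovering the harmonic spectrum from a periodic pressure signature, can be illustrated with a toy signal; the sample rate, blade-passing frequency, and harmonic amplitudes below are hypothetical, not values from the programs described:

    import numpy as np

    rng = np.random.default_rng(0)
    fs, bpf = 8192.0, 120.0                 # sample rate (Hz), blade-passing freq
    t = np.arange(0, 1.0, 1 / fs)
    # fundamental + second harmonic + a little noise, as a stand-in signature
    p = (np.cos(2 * np.pi * bpf * t) + 0.3 * np.cos(2 * np.pi * 2 * bpf * t)
         + 0.01 * rng.standard_normal(t.size))

    # one-sided amplitude spectrum of the pressure signature
    spec = 2 * np.abs(np.fft.rfft(p)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for k in (1, 2):
        i = np.argmin(np.abs(freqs - k * bpf))
        print(f"harmonic {k}: {freqs[i]:.0f} Hz, amplitude {spec[i]:.2f}")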
Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D; Rothman, Richard E
2012-09-01
This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR consisting of initial broad-based detection of Eubacteriales by a universal probe, followed by Gram typing and pathogen-specific probes, was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay was evaluated in 127 CSF samples from patients in Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of the uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species-level information for multiple bacterial meningitis agents in clinical samples. Copyright © 2012 Elsevier Inc. All rights reserved.
Thin film DNA-complex-based dye lasers fabricated by immersion and conventional processes
NASA Astrophysics Data System (ADS)
Kawabe, Yutaka; Suzuki, Yuki
2017-08-01
The DNA-based thin-film dye laser is a promising optical device for future technologies. Laser oscillation and amplified spontaneous emission (ASE) were demonstrated in hemicyanine-doped DNA complex films prepared by an 'immersion method' as well as in films made in the conventional way. In the immersion process, DNA-surfactant complex films were stained by immersion in an acetone solution containing the dyes. In this study, three types of hemicyanines were incorporated by both methods, and laser oscillation was achieved with an optically induced population grating formed in all of the complex films. The laser threshold values for the six cases ranged from 0.07 to 0.18 mJ/cm2, close to the best values reported for DNA complex matrices. Continual pumping showed that laser oscillation persisted for 4-10 minutes. The immersion process gave superior laser performance, especially in output efficiency, compared with the conventional counterparts.
Soós, Sándor Árpád; Jeszenői, Norbert; Darvas, Katalin; Harsányi, László
2016-11-08
Despite their worldwide popularity, the use of non-conventional treatments is a source of controversy among medical professionals. Although these methods may have potential benefits, problems arise when patients use non-conventional treatments in the perioperative period without informing their attending physician, which may cause adverse events and complications. To prevent this, physicians need a profound knowledge of non-conventional treatments. An anonymous questionnaire was distributed among surgeons and anaesthesiologists working in Hungarian university clinics and in selected city or county hospitals. Questionnaires were distributed by post, online or in person. Altogether, 258 questionnaires were received from 22 clinical and hospital departments. Anaesthesiologists and surgeons use reflexology, Traditional Chinese Medicine, herbal medicine and manual therapy most frequently in their clinical practice. Traditional Chinese Medicine was considered to be the most scientifically sound method, while homeopathy was perceived as the least well-grounded method. Neural therapy was the least well-known method among our subjects. Among the subjects of our survey, only 3.1% of perioperative care physicians had some qualification in non-conventional medicine, 12.4% considered themselves to be well informed on this topic and 48.4% would like to study some complementary method. Women were significantly more interested in alternative treatments than men (p = 0.001427; OR: 2.2765). Anaesthesiologists were significantly more willing to learn non-conventional methods than surgeons. Of the participants, 86.4% thought that non-conventional treatments should be evaluated from the point of view of evidence. Both surgeons and anaesthesiologists accept the application of integrative medicine, and they also approve of the idea of teaching these methods at universities. According to perioperative care physicians, non-conventional methods should be evaluated based on evidence. They also expressed a willingness to learn about those treatments that meet the criteria of evidence and to apply them in their clinical practice.
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
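Spelled out, the criterion described here has the general form below; this is a plausible rendering for illustration (the weight matrices B and W and the use of class means follow the usual DLPP conventions, not necessarily the paper's exact notation):

    \max_{\mathbf{w}}\; J(\mathbf{w}) =
      \frac{\sum_{i,j} B_{ij}\,\lvert \mathbf{w}^{\top}(\mathbf{m}_i - \mathbf{m}_j)\rvert}
           {\sum_{i,j} W_{ij}\,\lvert \mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j)\rvert}

where the x_i are training samples, the m_i are class means, and B and W are the locality-preserving between-class and within-class weight matrices; conventional DLPP maximizes the same ratio with squared L2 distances, which is what makes it sensitive to outliers.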
Comparison of retention between maxillary milled and conventional denture bases: A clinical study.
AlHelal, Abdulaziz; AlRumaih, Hamad S; Kattadiyil, Mathew T; Baba, Nadim Z; Goodacre, Charles J
2017-02-01
Clinical studies comparing the retention values of milled denture bases with those of conventionally processed denture bases are lacking. The purpose of this clinical study was to compare the retention values of conventional heat-polymerized denture bases with those of digitally milled maxillary denture bases. Twenty individuals with completely edentulous maxillary arches participated in this study. Definitive polyvinyl siloxane impressions were scanned (iSeries; Dental Wings), and the standard tessellation language files were sent to Global Dental Science for the fabrication of a computer-aided design and computer-aided manufacturing (CAD-CAM) milled denture base (group MB) (AvaDent). The impression was then poured to obtain a definitive cast that was used to fabricate a heat-polymerized acrylic resin denture base (group HB). A custom-designed testing device was used to measure denture retention (N). Each denture base was subjected to a vertical pulling force by using an advanced digital force gauge 3 times at 10-minute intervals. The average retention of the 2 fabrication methods was compared using repeated-measures ANOVA (α=.05). Significantly greater retention was observed for the milled denture bases than for the conventional heat-polymerized denture bases (P<.001). The retention offered by milled complete denture bases made from prepolymerized poly(methyl methacrylate) resin was significantly higher than that offered by conventional heat-polymerized denture bases. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Modeling method of time sequence model based grey system theory and application proceedings
NASA Astrophysics Data System (ADS)
Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang
2015-12-01
This article presents a modeling method for the grey-system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and prediction accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. In this way, we obtain a syphilis trend forecasting method based on information reuse and the grey-system GM(1,1) model.
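For reference, the conventional GM(1,1) baseline that such refinements build on can be sketched as follows; the information-reuse refinement itself is not reproduced, and function and variable names are illustrative:

    import numpy as np

    def gm11_forecast(x0, n_ahead=3):
        # Standard GM(1,1): accumulate, fit the whitening equation, invert.
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                           # accumulated generating op (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # dx1/dt + a*x1 = b
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # AGO-domain fit
        return np.diff(x1_hat, prepend=0.0)[len(x0):]      # inverse AGO -> forecasts

    print(gm11_forecast([2.8, 3.1, 3.4, 3.9, 4.4], n_ahead=2))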
Doshi, Neena Piyush
2017-01-01
Team-based learning (TBL) combines small- and large-group learning by incorporating multiple small groups in a large-group setting. It is a teacher-directed method that encourages student-student interaction. This study compares student learning and teaching satisfaction between conventional lectures and TBL in the subject of pathology. The present study aimed to assess the effectiveness of the TBL method of teaching over the conventional lecture. The study was conducted in the Department of Pathology, GMERS Medical College and General Hospital, Gotri, Vadodara, Gujarat. The study population comprised 126 second-year MBBS students in their third semester of the academic year 2015-2016. "Hemodynamic disorders" was taught by the conventional method and "transfusion medicine" by the TBL method, and the effectiveness of both methods was assessed. A posttest of multiple-choice questions was conducted at the end of "hemodynamic disorders." Assessment of TBL was based on the individual score, the team score, and each member's contribution to the success of the team. The individual score and overall score were compared with the posttest score on "hemodynamic disorders." Feedback was taken from the students regarding their experience with TBL. Tukey's multiple comparisons test and ANOVA were used to determine the significance of score differences between the didactic and TBL methods. Student feedback was collected using a "Student Satisfaction Scale" based on Likert scoring. The mean student scores for the didactic method, the Individual Readiness Assurance Test (score "A"), and overall TBL (score "D") were 49.8% (standard deviation [SD] 14.8), 65.6% (SD 10.9), and 65.6% (SD 13.8), respectively. The study showed positive educational outcomes with TBL in terms of knowledge acquisition, participation and engagement, and team performance.
Cervical motion assessment using virtual reality.
Sarig-Bahat, Hilla; Weiss, Patrice L; Laufer, Yocheved
2009-05-01
Repeated measures of cervical motion in asymptomatic subjects. To introduce a virtual reality (VR)-based assessment of cervical range of motion (ROM); to establish inter- and intratester reliability of the VR-based assessment in comparison with conventional assessment in asymptomatic individuals; and to evaluate the effect of a single VR session on cervical ROM. Cervical ROM and clinical issues related to neck pain are frequently studied, and a wide variety of methods is available for the evaluation of cervical motion. To date, most methods rely on voluntary responses to an assessor's instructions. However, in day-to-day life, head movement is generally an involuntary response to multiple stimuli. Therefore, there is a need for a more functional assessment method, using sensory stimuli to elicit spontaneous neck motion. VR attributes may provide a methodology for achieving this goal. A novel method was developed for cervical motion assessment utilizing an electromagnetic tracking system and a VR game scenario displayed via a head-mounted device. Thirty asymptomatic participants were assessed by both conventional and VR-based methods. Inter- and intratester repeatability analyses were performed, and the effect of a single VR session on ROM was evaluated. Both assessments showed unbiased results between tests and between testers (P > 0.1). Full-cycle repeatability coefficients ranged between 15.0 degrees and 29.2 degrees, with smaller values for rotation and for the VR assessment. A single VR session significantly increased ROM, with the largest effect found in the rotation direction. Inter- and intratester reliability was supported for both the VR-based and the conventional methods. The results suggest better repeatability for the VR method, with rotation being more precise than flexion/extension. A single VR session was found to be effective in increasing cervical motion, possibly due to its motivating effect.
Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation
NASA Technical Reports Server (NTRS)
Hong, Seunghun
2002-01-01
As proposed in our original proposal, we developed an innovative method to assemble millions of single-wall carbon nanotube (SWCNT)-based circuit components as quickly as conventional microfabrication processes. This method is based on a surface-template assembly strategy. The new method addresses one of the major bottlenecks in carbon-nanotube-based electrical applications and may potentially allow the mass production of a large number of SWCNT-based integrated devices of critical interest to NASA.
A Shot Number Based Approach to Performance Analysis in Table Tennis
Yoshida, Kazuto; Yamada, Koshi
2017-01-01
The current study proposes a novel approach that improves conventional performance analysis in table tennis by introducing the concept of frequency, or the number of shots, at each shot number. The improvements over the conventional method are as follows: better accuracy in evaluating the skills and tactics of players, additional insights into scoring and returning skills, and ease of understanding the results with a single criterion. A performance analysis of matches played at the 2012 Summer Olympics in London was conducted using the proposed method. The results showed some effects of the shot number and gender differences in table tennis. Furthermore, comparisons between Chinese players and players from other countries shed light on the skills and tactics of the Chinese players. The present findings demonstrate that the proposed method provides useful information and has some advantages over the conventional method.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.
Shukla, Surabhi; Einstein, A; Shukla, Abhilasha; Mishra, Deepika
2015-01-01
Background: Liquid-based cytology (LBC), recommended in the mass screening of potentially malignant cervical and oral lesions, suffers from high cost owing to the use of expensive automated devices and materials. Considering the need for cost-effective LBC techniques, we evaluated the efficacy of an inexpensive manual LBC (MLBC) technique against the conventional cytological technique in terms of specimen adequacy and smear quality of oral smears. Materials and Methods: Cytological samples were collected from 21 patients using a cytobrush device. After preparation of a conventional smear, the brush containing the remaining sample was immersed in the preservative vial. The preserved material was processed by an MLBC technique and, subsequently, direct smears were made from the prepared cell button. Both conventional and MLBC smears were stained by the routine Papanicolaou technique and evaluated by an independent observer for the thickness of the smear, cellular distribution, resolution/clarity of cells, cellular staining characteristics and the presence of unsatisfactory background/artifacts. Each parameter was graded as satisfactory; satisfactory, but limited; or unsatisfactory. The Chi-square test was used to compare the values obtained (significance set at P ≤ 0.05). Results: The MLBC technique produced a significantly higher proportion of satisfactory smears with regard to cell distribution, clarity/resolution, staining characteristics and background/artifacts compared to the conventional method. Conclusions: MLBC is a cost-effective cytological technique that may produce oral smears with excellent cytomorphology and longer storage life.
Chen, Minghao; Wei, Shiyou; Hu, Junyan; Yuan, Jing; Liu, Fenghua
2017-01-01
The present study aimed to undertake a review of available evidence assessing whether time-lapse imaging (TLI) has favorable outcomes for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization (IVF). PubMed, EMBASE, the Cochrane library and ClinicalTrial.gov were searched up to February 2017 for randomized controlled trials (RCTs) comparing TLI versus conventional methods. Studies that randomized either women or oocytes were included. For studies that randomized women, the primary outcomes were live birth and ongoing pregnancy, and the secondary outcomes were clinical pregnancy and miscarriage; for studies that randomized oocytes, the primary outcome was blastocyst rate and the secondary outcome was good-quality embryo on Day 2/3. Subgroup analysis was conducted based on differences in incubation and embryo selection between groups. Ten RCTs were included, four randomizing oocytes and six randomizing women. For the oocyte-based review, the pooled analysis observed no significant difference between the TLI group and the control group for blastocyst rate [relative risk (RR) 1.08, 95% CI 0.94-1.25, I2 = 0%, two studies, including 1154 embryos]. The quality of evidence was moderate for all outcomes in the oocyte-based review. For the woman-based review, only one study provided live birth rate (RR 1.23, 95% CI 1.06-1.44, I2 N/A, one study, including 842 women), and the pooled result showed no significant difference in ongoing pregnancy rate (RR 1.04, 95% CI 0.80-1.36, I2 = 59%, four studies, including 1403 women) between the two groups. The quality of the evidence was low or very low for all outcomes in the woman-based review. Currently there is insufficient evidence to support that TLI is superior to conventional methods for human embryo incubation and selection. In consideration of the limitations and flaws of the included studies, more well-designed RCTs are still needed to comprehensively evaluate the effectiveness of clinical TLI use.
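For readers unfamiliar with how pooled relative risks such as those quoted above are produced, the following is a minimal fixed-effect (inverse-variance) pooling sketch on made-up 2×2 counts; the included trials' actual data and any random-effects modelling are not reproduced.

```python
# Hedged sketch of fixed-effect (inverse-variance) pooling of relative risks,
# the kind of calculation behind pooled results such as RR 1.04 (95% CI
# 0.80-1.36). Event counts are illustrative, not the trial data.
import numpy as np

# (events_tli, n_tli, events_ctrl, n_ctrl) per trial -- made-up numbers
trials = [(60, 180, 50, 175), (45, 160, 47, 158), (70, 200, 62, 195)]

log_rr, weights = [], []
for e1, n1, e0, n0 in trials:
    rr = (e1 / n1) / (e0 / n0)
    se = np.sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)  # SE of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / se**2)                 # inverse-variance weight

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / weights.sum()
se_pooled = 1 / np.sqrt(weights.sum())
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```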
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, feature engineering has dominated conventional methods, involving the difficult process of optimal feature selection. This problem has been mitigated by using a novel methodology based on a deep learning framework, which automatically extracts useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for the recognition of three fundamental movements of the human forearm performed in daily life, with data collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy data conditions, evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
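A minimal sketch of the kind of model the abstract describes may be useful: a small 1-D CNN taking raw tri-axial accelerometer windows and emitting three movement classes. The architecture (filter counts, window length) is an assumption for illustration, not the authors' network.

```python
# Minimal PyTorch sketch of a 1-D CNN for wrist-accelerometer activity
# recognition: raw tri-axial windows in, three forearm-movement classes out.
# Layer sizes are assumed, not the authors' design.
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    def __init__(self, n_classes: int = 3, window: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2),  # 3 accel axes in
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (window // 4), n_classes)

    def forward(self, x):               # x: (batch, 3, window)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = ActivityCNN()
dummy = torch.randn(8, 3, 128)          # 8 windows of 128 samples each
print(model(dummy).shape)               # torch.Size([8, 3])
```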
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.
NASA Astrophysics Data System (ADS)
Cattaneo, Paolo M.; Dalstra, Michel; Beckmann, Felix; Donath, Tilman; Melsen, Birte
2004-10-01
This study explores the application of conventional micro tomography (μCT) and synchrotron radiation (SR) based μCT to evaluate the bone around titanium dental implants. The SR experiment was performed at beamline W2 of HASYLAB at DESY using a monochromatic X-ray beam of 50 keV. The testing material consisted of undecalcified bone segments harvested from the upper jaw of a Macaca fascicularis monkey, each containing a titanium dental implant. The results from the two different techniques were qualitatively compared with conventional histological sections examined under light microscopy. The SR-based μCT produced images that, especially at the bone-implant interface, are less noisy and sharper than those obtained with conventional μCT. For the proper evaluation of the implant-bone interface, only the SR-based μCT technique is able to display the areas of bony contact and visualize the true 3D structure of bone around dental implants correctly. This investigation shows that both conventional and SR-based μCT scanning techniques are non-destructive methods which provide detailed images of bone. However, with SR-based μCT it is possible to obtain an improved image quality of the bone surrounding dental implants, displaying a level of detail comparable to histological sections. Therefore, SR-based μCT scanning could represent a valid, unbiased three-dimensional alternative for evaluating the osseointegration of dental implants.
Evaluation of bearing capacity of piles from cone penetration test data.
DOT National Transportation Integrated Search
2007-12-01
A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carrying capacity.
Development of quadruped walking locomotion gait generator using a hybrid method
NASA Astrophysics Data System (ADS)
Jasni, F.; Shafie, A. A.
2013-12-01
The earth, in many areas, is hardly reachable by wheeled or tracked locomotion systems. Thus, walking locomotion is becoming a favourite option for mobile robots these days because of its ability to move on rugged and unlevel terrains. However, developing a walking locomotion gait for a robot is not a simple task. The Central Pattern Generator (CPG) method is a biologically inspired method that has recently been introduced to develop gaits for walking robots, to tackle the issues faced by the conventional method of pre-designed trajectories. However, research shows that even the CPG method has some limitations. Thus, in this paper, a hybrid method that combines the CPG and pre-designed trajectory based methods is introduced to develop a walking gait for a quadruped walking robot. The 3-D foot trajectories and the joint angle trajectories developed using the proposed method are compared with data obtained via the conventional pre-designed trajectory method to confirm the performance.
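To make the CPG half of such a hybrid concrete, the sketch below integrates a Hopf oscillator, a common CPG building block, whose stable limit cycle can drive one joint; the pre-designed trajectory component would be blended on top. The oscillator choice, gains and frequency are illustrative assumptions, not the paper's design.

```python
# Illustrative CPG sketch: a Hopf oscillator integrated with forward Euler
# produces a stable rhythmic signal that can drive one leg joint.
import numpy as np

mu, omega, dt = 1.0, 2 * np.pi, 0.001    # target radius^2, rad/s, time step
x, y = 0.1, 0.0                          # small initial perturbation
trajectory = []
for _ in range(int(3.0 / dt)):           # 3 s of gait
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y       # Hopf normal form: converges to a
    dy = (mu - r2) * y + omega * x       # limit cycle of radius sqrt(mu)
    x, y = x + dx * dt, y + dy * dt
    trajectory.append(x)

hip_angle = 15.0 * np.array(trajectory)  # map oscillator output to degrees
print(hip_angle[:5], hip_angle.max())
```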
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. Unlike the conventional network structure, in which a single network with the optimum number of processing elements calculates the outputs, the first method uses a multinetwork system connected in parallel to do the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than with the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
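A compact sketch of the principal-component-based idea follows: decorrelate and reduce the magnetic measurements with PCA, then regress plasma parameters with a small neural network. The data are random placeholders and the layer sizes are assumptions; the double-network variant is not shown.

```python
# Rough sketch of the principal-component-transformation-based network:
# PCA reduces and decorrelates the sensor inputs before a neural network
# maps them to plasma parameters. Data are placeholders, not tokamak data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 60))          # 60 magnetic sensor readings
true_map = rng.normal(size=(60, 4))
y = X @ true_map + 0.01 * rng.normal(size=(2000, 4))  # 4 plasma parameters

model = make_pipeline(
    PCA(n_components=12),                # dimensionality reduction step
    MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
)
model.fit(X[:1500], y[:1500])
print("R^2 on held-out data:", model.score(X[1500:], y[1500:]))
```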
Pseudophasic extraction method for the separation of ultra-fine minerals
Chaiko, David J.
2002-01-01
An improved aqueous-based extraction method for the separation and recovery of ultra-fine mineral particles. The process operates within the pseudophase region of the conventional aqueous biphasic extraction system, where either a low-molecular-weight, water-soluble polymer is used alone in combination with a salt, or a combination of low-molecular-weight, mutually immiscible polymers is used with or without a salt. This method is especially suited for the purification of clays that are useful as rheological control agents and for the preparation of nanocomposites.
Measurement of Crystalline Silica Aerosol Using Quantum Cascade Laser-Based Infrared Spectroscopy.
Wei, Shijun; Kulkarni, Pramod; Ashley, Kevin; Zheng, Lina
2017-10-24
Inhalation exposure to airborne respirable crystalline silica (RCS) poses major health risks in many industrial environments. There is a need for new sensitive instruments and methods for in-field or near real-time measurement of crystalline silica aerosol. The objective of this study was to develop an approach, using quantum cascade laser (QCL)-based infrared spectroscopy (IR), to quantify airborne concentrations of RCS. Three sampling methods were investigated for their potential for effective coupling with QCL-based transmittance measurements: (i) conventional aerosol filter collection, (ii) focused spot sample collection directly from the aerosol phase, and (iii) dried spot obtained from deposition of liquid suspensions. Spectral analysis methods were developed to obtain IR spectra from the collected particulate samples in the range 750-1030 cm⁻¹. The new instrument was calibrated and the results were compared with standardized methods based on Fourier transform infrared (FTIR) spectrometry. Results show that significantly lower detection limits for RCS (≈330 ng), compared to conventional infrared methods, could be achieved with effective microconcentration and careful coupling of the particulate sample with the QCL beam. These results offer promise for further development of sensitive filter-based laboratory methods and portable sensors for near real-time measurement of crystalline silica aerosol.
Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-10-29
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN) based data association. The tracking data is broadcast to the other robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, which provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
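The covariance intersection rule mentioned above can be stated compactly: P⁻¹ = ωP₁⁻¹ + (1−ω)P₂⁻¹, with ω chosen to minimise, for example, the trace of the fused covariance. The sketch below applies it to two toy pedestrian-position estimates; the robots' actual filter states are not reproduced.

```python
# Minimal covariance intersection (CI) sketch for fusing two position
# estimates with unknown cross-correlation. The weight omega is picked by a
# coarse scan minimising the fused covariance trace; estimates are made up.
import numpy as np

x1 = np.array([2.0, 1.0]);  P1 = np.diag([0.5, 2.0])   # robot 1 track
x2 = np.array([2.3, 0.8]);  P2 = np.diag([2.0, 0.4])   # robot 2 track

best = None
for w in np.linspace(0.01, 0.99, 99):
    info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
    P = np.linalg.inv(info)
    if best is None or np.trace(P) < np.trace(best[1]):
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
        best = (x, P, w)

x_ci, P_ci, w = best
print(f"omega={w:.2f}, fused x={x_ci}, trace={np.trace(P_ci):.3f}")
```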
Cryptanalysis of "an improvement over an image encryption method based on total shuffling"
NASA Astrophysics Data System (ADS)
Akhavan, A.; Samsudin, A.; Akhshani, A.
2015-09-01
In the past two decades, several image encryption algorithms based on chaotic systems have been proposed, many of them meant to improve other chaos-based and conventional cryptographic algorithms. However, many of the proposed improvement methods suffer from serious security problems. In this paper, the security of the recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plain-text attack.
Quantification of protein interaction kinetics in a micro droplet
NASA Astrophysics Data System (ADS)
Yin, L. L.; Wang, S. P.; Shan, X. N.; Zhang, S. T.; Tao, N. J.
2015-11-01
Characterization of protein interactions is essential to the discovery of disease biomarkers, the development of diagnostic assays, and the screening for therapeutic drugs. Conventional flow-through kinetic measurements need a relatively large amount of sample, which is not feasible for precious protein samples. We report a novel method to measure protein interaction kinetics in a single droplet with sub-microliter or smaller volume. A droplet in a humidity-controlled environmental chamber replaces the microfluidic channels as the reactor for the protein interaction. The binding process is monitored by a surface plasmon resonance imaging (SPRi) system. Association curves are obtained from the average SPR image intensity in the center area of the droplet. The washing step required by the conventional flow-through SPR method is eliminated in the droplet method. The association and dissociation rate constants and binding affinity of an antigen-antibody interaction are obtained by global fitting of association curves at different concentrations. The result obtained by this method is accurate, as validated against a conventional flow-through SPR system. This droplet-based method not only allows kinetic studies for proteins with limited supply but also opens the door for high-throughput protein interaction study in a droplet-based microarray format that enables measurement of many-to-many interactions on a single chip.
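As a worked illustration of the global fit described above, the sketch below fits a 1:1 Langmuir association model, R(t) = C·kon·Rmax/(C·kon + koff)·(1 − e^(−(C·kon+koff)t)), jointly across several concentrations of simulated SPR traces; the concentrations, noise level and starting guesses are assumptions, not the paper's data.

```python
# Hedged sketch of a global 1:1 Langmuir binding fit across concentrations,
# sharing one (kon, koff, Rmax) triple. SPR traces are simulated.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 120, 200)                      # seconds
concs = np.array([5e-9, 2e-8, 8e-8])              # molar

def association(tc, kon, koff, rmax):
    tt, c = tc                                    # stacked time and concentration
    kobs = c * kon + koff
    return c * kon * rmax / kobs * (1 - np.exp(-kobs * tt))

# simulate noisy curves with "true" kon=1e6 1/(M s), koff=1e-3 1/s, rmax=100
rng = np.random.default_rng(3)
tt = np.tile(t, len(concs))
cc = np.repeat(concs, len(t))
r = association((tt, cc), 1e6, 1e-3, 100) + rng.normal(0, 0.5, tt.size)

(kon, koff, rmax), _ = curve_fit(association, (tt, cc), r, p0=(1e5, 1e-2, 50))
print(f"kon={kon:.2e} 1/(M s), koff={koff:.2e} 1/s, KD={koff/kon:.2e} M")
```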
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slabodchikov, Vladimir A., E-mail: dipis1991@mail.ru; Borisov, Dmitry P., E-mail: borengin@mail.ru; Kuznetsov, Vladimir M., E-mail: kuznetsov@rec.tsu.ru
The paper reports on a new method of plasma immersion ion implantation for the surface modification of medical materials, using the example of nickel-titanium (NiTi) alloys much used for manufacturing medical implants. The chemical composition and surface properties of NiTi alloys doped with silicon by conventional ion implantation and by the proposed plasma immersion method are compared. It is shown that the new plasma immersion method is more efficient than conventional ion beam treatment and provides Si implantation into NiTi surface layers through a depth of a hundred nanometers at low bias voltages (400 V) and temperatures (≤150°C) of the substrate. The research results suggest that the chemical composition and surface properties of materials required for medicine, e.g., NiTi alloys, can be successfully attained through modification by the proposed method of plasma immersion ion implantation and by other methods based on the proposed vacuum equipment, without using any conventional ion beam treatment.
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods, such as FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work attempts to review recent developments and some earlier applications of well-known meshfree methods, such as EFG and MLPG, to various types of structural mechanics and fracture mechanics applications: bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.
NASA Astrophysics Data System (ADS)
Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.
2012-02-01
Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
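The target registration error used above is simply the mean distance between corresponding landmarks after the estimated correction; a toy computation, with made-up landmarks and a small residual rigid transform standing in for imperfect registration, is sketched below.

```python
# Small sketch of the target registration error (TRE) metric: the mean
# Euclidean distance between corresponding landmarks after an estimated
# motion correction. Landmarks and transforms are toy values.
import numpy as np

landmarks = np.array([[10.0, 20.0, 30.0],
                      [40.0, 15.0, 25.0],
                      [22.0, 33.0, 11.0]])        # true positions, mm

def rigid(points, R, t):
    return points @ R.T + t

theta = np.deg2rad(2.0)                           # residual 2-degree error
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([0.5, -0.3, 0.2])                    # residual translation, mm

recovered = rigid(landmarks, R, t)                # after imperfect correction
tre = np.linalg.norm(recovered - landmarks, axis=1).mean()
print(f"TRE = {tre:.2f} mm")
```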
Yoo, Seunghwan; Song, Ho Young; Lee, Junghoon; Jang, Cheol-Yong; Jeong, Hakgeun
2012-11-20
In this article, we introduce a simple fabrication method for SiO2-based thin diffractive optical elements (DOEs) that uses conventional processes widely used in the semiconductor industry. Photolithography and inductively coupled plasma etching are easy and cost-effective techniques for fabricating subnanometer-scale, thin DOEs with a refractive index of 1.45, based on SiO2. After fabricating the DOEs, we confirmed the shape of the output light emitted from a laser diode light source and applied the DOEs to a light-emitting diode (LED) module. The results represent a new approach to mass-producing DOEs and realizing a high-brightness LED module.
Murahashi, Shun-Ichi
2011-01-01
This review focuses on the development of ruthenium and flavin catalysts for environmentally benign oxidation reactions based on mimicking the functions of cytochrome P-450 and flavoenzymes, and low valent transition-metal catalysts that replace conventional acids and bases. Several new concepts and new types of catalytic reactions based on these concepts are described. (Communicated by Ryoji Noyori, M.J.A.).
Zhao, Weixiang; Davis, Cristina E.
2011-01-01
Objective This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with radial basis function - partial least squares regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods, the back-propagation neural network (BPNN) and the orthogonal radial basis function network (Ortho-RBF network). Results The diagnosis results show that: (1) both AIS-kNN and the proposed AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possibly improper AIS parameters. For example, based upon the AIS memory cells of "replacement threshold = 0.3", the average misclassification ratios of the two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS showed robustness with respect to the AIS-created memory cells, with a smaller standard deviation of the results from the multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN), and those for study 2 are 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approach also yielded better diagnosis results than the two classical neural network approaches, BPNN and the Ortho-RBF network. Conclusion In summary, this paper proposed a new machine learning method for complex systems by integrating the AIS system with RBFPLS. This new method demonstrates a satisfactory effect on classification accuracy for clinical diagnosis, and also indicates wide potential applications to other diagnosis and detection problems.
Biochips: non-conventional strategies for biosensing elements immobilization.
Marquette, Christophe A; Corgier, Benjamin P; Heyries, Kevin A; Blum, Loic J
2008-01-01
The present article draws a general picture of non-conventional methods for biomolecules immobilization. The technologies presented are based either on original solid supports or on innovative immobilization processes. Polydimethylsiloxane elastomer will be presented as a popular immobilization support within the biochip developer community. Electro-addressing of biomolecules at the surface of conducting biochips will appear to be an interesting alternative to immobilization processes based on surface functionalization. Finally, bead-assisted biomolecules immobilization will be presented as an open field of research for biochip developments.
SU-G-IeP2-06: Evaluation of Registration Accuracy for Cone-Beam CT Reconstruction Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Wang, P; Zhang, H
2016-06-15
Purpose: Cone-beam (CB) computed tomography (CT) is used for image guidance during radiotherapy treatment delivery. Conventional Feldkamp and compressed sensing (CS) based CBCT reconstruction techniques are compared for image registration. This study is to evaluate the image registration accuracy of conventional and CS CBCT for head-and-neck (HN) patients. Methods: Ten HN patients with oropharyngeal tumors were retrospectively selected. Each HN patient had one planning CT (CTP), and three CBCTs were acquired during an adaptive radiotherapy protocol. Each CBCT was reconstructed by both the conventional (CBCTCON) and compressed sensing (CBCTCS) methods. Two oncologists manually labeled 23 landmarks of normal tissue and implanted gold markers on both the CTP and CBCTCON. Subsequently, landmarks on CTP were propagated to CBCTs using a b-spline-based deformable image registration (DIR) and rigid registration (RR). The errors of these registration methods between the two CBCT methods were calculated. Results: For DIR, the mean distance between the propagated and the labeled landmarks was 2.8 mm ± 0.52 for CBCTCS, and 3.5 mm ± 0.75 for CBCTCON. For RR, the mean distance between the propagated and the labeled landmarks was 6.8 mm ± 0.92 for CBCTCS, and 8.7 mm ± 0.95 for CBCTCON. Conclusion: This study has demonstrated that CS CBCT is more accurate than conventional CBCT in image registration by both rigid and non-rigid methods. It is potentially suggested that CS CBCT is an improved image modality for image-guided adaptive applications.
K, Jalal Deen; R, Ganesan; A, Merline
2017-07-27
Objective: Accurate segmentation of abnormal and healthy lungs is crucial for steadfast computer-aided disease diagnostics. Methods: For this purpose, a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional approach, the required regions of interest (ROI) are identified using a Markov-Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM and CNN based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment various kinds of complex multimodal medical images precisely. Conclusion: To obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation, and a classification process based on a Convolutional Neural Network (CNN) classifier is accomplished to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database.
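A bare-bones version of the fuzzy C-means step named in this pipeline is sketched below on synthetic intensity data; the CNN classification stage and the MGRF baseline are not shown, and the cluster count and fuzzifier are conventional defaults rather than the authors' settings.

```python
# Bare-bones fuzzy C-means (FCM) sketch run on pixel intensities.
# Data are random intensities standing in for CT values.
import numpy as np

def fcm(data, c=2, m=2.0, iters=100, seed=0):
    """data: (n, d) points; returns (memberships (n, c), centers (c, d))."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

pixels = np.concatenate([np.random.normal(-800, 60, 500),   # air/lung-like
                         np.random.normal(40, 30, 500)])    # tissue-like
u, centers = fcm(pixels.reshape(-1, 1))
print("cluster centers:", centers.ravel())
```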
Weak wide-band signal detection method based on small-scale periodic state of Duffing oscillator
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Xiao-peng; Li, Ping; Hao, Xin-hong
2018-03-01
The conventional Duffing oscillator weak signal detection method, which is based on a strong reference signal, has inherent deficiencies. To address these issues, the characteristics of the Duffing oscillatorʼs phase trajectory in a small-scale periodic state are analyzed by introducing the theory of stopping oscillation system. Based on this approach, a novel Duffing oscillator weak wide-band signal detection method is proposed. In this novel method, the reference signal is discarded, and the to-be-detected signal is directly used as a driving force. By calculating the cosine function of a phase space angle, a single Duffing oscillator can be used for weak wide-band signal detection instead of an array of uncoupled Duffing oscillators. Simulation results indicate that, compared with the conventional Duffing oscillator detection method, this approach performs better in frequency detection intervals, and reduces the signal-to-noise ratio detection threshold, while improving the real-time performance of the system. Project supported by the National Natural Science Foundation of China (Grant No. 61673066).
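For orientation, a toy Duffing detector in the reference-free spirit of the abstract is sketched below: the oscillator in the common Holmes form x″ + kx′ − x + x³ = f(t) is driven directly by the input signal, and a phase-space angle is extracted. All coefficients and thresholds are illustrative assumptions; the paper's exact formulation is not reproduced.

```python
# Toy Duffing oscillator driven directly by the to-be-detected signal,
# with a phase-space angle extracted as a crude state indicator.
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(drive):
    def rhs(t, s):
        x, v = s
        return [v, -0.5 * v + x - x**3 + drive(t)]  # Holmes form, damping 0.5
    return rhs

drive = lambda t: 0.72 * np.cos(2 * np.pi * t)      # stand-in input signal

sol = solve_ivp(make_rhs(drive), (0, 100), [0.1, 0.0],
                t_eval=np.linspace(50, 100, 5000), max_step=0.01)
x, v = sol.y                                        # transient discarded
angle = np.arctan2(v, x)                            # phase-space angle
print("mean cos(angle):", np.cos(angle).mean())
```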
Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE
NASA Astrophysics Data System (ADS)
Itai, Akitoshi; Yasukawa, Hiroshi
This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown previously that a tensor product expansion with the absolute error method is effective for estimating background noise; however, background noise might not be estimated properly using the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.
NASA Astrophysics Data System (ADS)
Singh, Manpreet; Alabanza, Anginelle; Gonzalez, Lorelis E.; Wang, Weiwei; Reeves, W. Brian; Hahm, Jong-In
2016-02-01
Determining ultratrace amounts of protein biomarkers in patient samples in a straightforward and quantitative manner is extremely important for early disease diagnosis and treatment. Here, we successfully demonstrate the novel use of zinc oxide nanorods (ZnO NRs) in the ultrasensitive and quantitative detection of two acute kidney injury (AKI)-related protein biomarkers, tumor necrosis factor (TNF)-α and interleukin (IL)-8, directly from patient samples. We first validate the ZnO NRs-based IL-8 results via comparison with those obtained from using a conventional enzyme-linked immunosorbent method in samples from 38 individuals. We further assess the full detection capability of the ZnO NRs-based technique by quantifying TNF-α, whose levels in human urine are often below the detection limits of conventional methods. Using the ZnO NR platforms, we determine the TNF-α concentrations of all 46 patient samples tested, down to the fg per mL level. Subsequently, we screen for TNF-α levels in approximately 50 additional samples collected from different patient groups in order to demonstrate a potential use of the ZnO NRs-based assay in assessing cytokine levels useful for further clinical monitoring. Our research efforts demonstrate that ZnO NRs can be straightforwardly employed in the rapid, ultrasensitive, quantitative, and simultaneous detection of multiple AKI-related biomarkers directly in patient urine samples, providing an unparalleled detection capability beyond those of conventional analysis methods. Additional key advantages of the ZnO NRs-based approach include a fast detection speed, low-volume assay condition, multiplexing ability, and easy automation/integration capability to existing fluorescence instrumentation. Therefore, we anticipate that our ZnO NRs-based detection method will be highly beneficial for overcoming the frequent challenges in early biomarker development and treatment assessment, pertaining to the facile and ultrasensitive quantification of hard-to-trace biomolecules.
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
Fu, Szu-Wei; Li, Pei-Chun; Lai, Ying-Hui; Yang, Cheng-Chien; Hsieh, Li-Chun; Tsao, Yu
2017-11-01
Objective: This paper focuses on machine learning based voice conversion (VC) techniques for improving the speech intelligibility of surgical patients who have had parts of their articulators removed. Because of the removal of parts of the articulator, a patient's speech may be distorted and difficult to understand. To overcome this problem, VC methods can be applied to convert the distorted speech such that it is clear and more intelligible. To design an effective VC method, two key points must be considered: 1) the amount of training data may be limited (because speaking for a long time is usually difficult for postoperative patients); 2) rapid conversion is desirable (for better communication). Methods: We propose a novel joint dictionary learning based non-negative matrix factorization (JD-NMF) algorithm. Compared to conventional VC techniques, JD-NMF can perform VC efficiently and effectively with only a small amount of training data. Results: The experimental results demonstrate that the proposed JD-NMF method not only achieves notably higher short-time objective intelligibility (STOI) scores (a standardized objective intelligibility evaluation metric) than those obtained using the original unconverted speech but is also significantly more efficient and effective than a conventional exemplar-based NMF VC method. Conclusion: The proposed JD-NMF method may outperform the state-of-the-art exemplar-based NMF VC method in terms of STOI scores under the desired scenario. Significance: We confirmed the advantages of the proposed joint training criterion for the NMF-based VC. Moreover, we verified that the proposed JD-NMF can effectively improve the speech intelligibility scores of oral surgery patients.
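The factorisation at the core of NMF-based VC can be shown in a few lines: a non-negative magnitude spectrogram is decomposed into a spectral dictionary W and activations H. The sketch below uses scikit-learn's generic NMF on random data; the joint source/target dictionary training that defines JD-NMF is not reproduced.

```python
# Generic NMF sketch of the building block behind JD-NMF: a magnitude
# spectrogram factorised into a spectral dictionary W and activations H.
# The "spectrogram" is random non-negative data, not speech.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
spectrogram = rng.random((257, 400))        # freq bins x frames, non-negative

nmf = NMF(n_components=40, init="nndsvda", max_iter=400)
W = nmf.fit_transform(spectrogram)          # (257, 40) spectral dictionary
H = nmf.components_                         # (40, 400) activations

print("reconstruction error:", np.linalg.norm(spectrogram - W @ H))
```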
Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z
2017-03-01
Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
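A drop-in GP smoothing sketch for a single voxel's time series follows, using scikit-learn's GaussianProcessRegressor with an RBF-plus-white-noise kernel so the fitted length scale adapts the smoothing to the data; the kernel settings and synthetic signal are assumptions, not the authors' implementation.

```python
# GP-based smoothing of one voxel's noisy time series: the RBF length scale
# and noise level are learned from the data, giving adaptive smoothing.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
t = np.arange(200, dtype=float)[:, None]          # time points (TRs)
signal = np.sin(t[:, 0] / 15.0)                   # slow "neural" component
y = signal + 0.5 * rng.normal(size=200)           # noisy voxel time series

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
smoothed = gp.predict(t)                          # adaptively smoothed series

print("residual SD vs. true signal:", np.std(smoothed - signal))
```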
The equivalent magnetizing method applied to the design of gradient coils for MRI.
Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart
2008-01-01
This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its equivalent representation as a surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. This method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional methods. Through the presented example, we demonstrate that the unconventional current patterns generated by the magnetizing current method yield superior gradient coil performance compared with coils designed using conventional methods.
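For reference, the magnetostatic identity this construction rests on can be written out: for a sheet carrying a normal dipole density M_z(x, y), the equivalent surface current is the curl of the magnetization, so M_z acts as a stream function and its level contours trace the windings. This is the textbook equivalence; the paper's surface formulation may add constraints.

```latex
% Equivalent surface current of a sheet with normal magnetization M_z(x,y):
\mathbf{K}(x,y) = \nabla M_z(x,y) \times \hat{\mathbf{z}},
\qquad
\psi(x,y) \equiv M_z(x,y) \quad \text{(stream function; contours give the windings)}
```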
Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S
2018-04-01
One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules and SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, half starting with the computer-based and the other half with the conventional approach in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both the overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction for the repeated-measures effect, the results of the two examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g., point deductions for assistance from the medical examiner were possible only at the conventional station). We therefore demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those from a conventional format exam. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Simultaneous HPLC determination of flavonoids and phenolic acids profile in Pêra-Rio orange juice.
Mesquita, E; Monteiro, M
2018-04-01
The aim of this study was to develop and validate an HPLC-DAD method to evaluate the phenolic compounds profile of organic and conventional Pêra-Rio orange juice. The proposed method was validated for 10 flavonoids and 6 phenolic acids. A wide linear range (0.01-223.4 μg·g⁻¹), good accuracy (79.5-129.2%) and precision (CV≤3.8%), low limits of detection (1-22 ng·g⁻¹) and quantification (0.7-7.4 μg), and overall ruggedness were attained. Good recovery was achieved for all phenolic compounds after extraction and cleanup. The method was applied to organic and conventional Pêra-Rio orange juices from the beginning, middle and end of the 2016 harvest. The flavones rutin, nobiletin and tangeretin, and the flavanones hesperidin, narirutin and eriocitrin were identified and quantified in all organic and conventional juices. Identity was confirmed by mass spectrometry. Nineteen non-identified phenolic compounds were quantified based on DAD spectra characteristic of the chemical class: 7 cinnamic acid derivatives, 6 flavanones and 6 flavones. The phenolic compounds profile of Pêra-Rio orange juices changed during the harvest; levels increased in organic orange juices, and decreased or remained about the same in conventional orange juices. Phenolic compounds levels were higher in organic (0.5-1143.7 mg·100 g⁻¹) than in conventional orange juices (0.5-689.7 mg·100 g⁻¹). PCA differentiated organic from conventional FS and NFC juices, and conventional FCOJ from conventional FS and NFC juices, thus differentiating cultivation and processing. Copyright © 2017. Published by Elsevier Ltd.
Parthipan, Sivashanmugam; Selvaraju, Sellappan; Somashekar, Lakshminarayana; Kolte, Atul P; Arangasamy, Arunachalam; Ravindra, Janivara Parameswaraiah
2015-08-01
Sperm RNA can be used to understand the past spermatogenic process, future successful fertilization, and embryo development. To study the sperm RNA composition and function, isolation of good quality RNA with sufficient quantity is essential. The objective of this study was to assess the influence of sperm input concentrations and RNA isolation methods on RNA yield and quality in bull sperm. The fresh semen samples from bulls (n = 6) were snap-frozen in liquid nitrogen and stored at -80 °C. The sperm RNA was isolated using membrane-based methods combined with TRIzol (RNeasy+TRIzol and PureLink+TRIzol) and conventional methods (TRIzol, Double TRIzol, and RNAzol RT). Based on fluorometric quantification, combined methods resulted in significantly (P < 0.05) higher total RNA yields (800-900 ng/30-40 × 10⁶) as compared with other methods and yielded 20 to 30 fg of RNA/spermatozoon. The quality of RNA isolated by membrane-based methods was superior to that isolated by conventional methods. The sperm RNA was observed to be intact as well as fragmented (50-2000 bp). The study revealed that the membrane-based methods with a cocktail of lysis solution and an optimal input concentration of 30 to 40 million sperm were optimal for maximum recovery of RNA from bull spermatozoa. Copyright © 2015 Elsevier Inc. All rights reserved.
Cochand-Priollet, Béatrix; Cartier, Isabelle; de Cremoux, Patricia; Le Galès, Catherine; Ziol, Marianne; Molinié, Vincent; Petitjean, Alain; Dosda, Anne; Merea, Estelle; Biaggi, Annonciade; Gouget, Isabelle; Arkwright, Sylviane; Vacher-Lavenu, Marie-Cécile; Vielh, Philippe; Coste, Joël
2005-11-01
Many articles concerning conventional Pap smears, ThinPrep liquid-based cytology (LBC) and the Hybrid Capture II HPV test (HC II) have been published. This study, carried out by the French Society of Clinical Cytology, is noteworthy for several reasons: it was financially independent; it compared the efficiency of the conventional Pap smear with LBC, and of the conventional Pap smear with HC II, and included an economic study based on real costs; for all the women, a "gold standard" reference method, colposcopy, was available and biopsies were performed whenever a lesion was detected; and the conventional Pap smear, the LBC (split-sample technique), the colposcopy, and the biopsies were done at the same time. This study included 2,585 women divided into two groups: group A, a high-risk population, and group B, a screening population. The statistical analysis of the results showed that conventional Pap smears consistently had sensitivity and specificity superior or equivalent to LBC for lesions at the threshold of CIN-I (cervical intraepithelial neoplasia) or CIN-II or higher. It underlined the low specificity of the HC II. Finally, the LBC mean cost was never covered by the Social Security tariff.
Prediction of Groundwater Level at Slope Areas using Electrical Resistivity Method
NASA Astrophysics Data System (ADS)
Baharuddin, M. F. T.; Hazreek, Z. A. M.; Azman, M. A. A.; Madun, A.
2018-04-01
Groundwater level plays an important role as an agent that triggers landslides. Commonly, the groundwater level is monitored using standpipe piezometers, a conventional method with several disadvantages related to cost, time and data coverage. The aim of this study is to determine the groundwater level at slope areas using the electrical resistivity method and to verify the groundwater level of the study area against standpipe piezometer data. The data acquisition was performed using an ABEM Terrameter SAS4000. For data analysis and processing, RES2DINV and SURFER were used. The groundwater level derived from the electrical resistivity values (ERV) was calibrated against the standpipe piezometer readings.
Giannaki, Christoforos D; Aphamis, George; Sakkis, Panikos; Hadjicharalambous, Marios
2016-04-01
High intensity interval training (HIIT) has recently been promoted as an effective, low volume and time-efficient training method for improving fitness and health related parameters. The aim of the current study was to examine the effect of a combination of group-based HIIT and conventional gym training on physical fitness and body composition parameters in healthy adults. Thirty-nine healthy adults volunteered to participate in this eight-week intervention study. Twenty-three participants performed regular gym training 4 days a week (C group), whereas the remaining 16 participants engaged in HIIT twice a week and in regular gym training twice a week (HIIT-C group). Total body fat and visceral adiposity levels were calculated using bioelectrical impedance analysis. Physical fitness parameters such as cardiorespiratory fitness, speed, lower limb explosiveness, flexibility and isometric arm strength were assessed through a battery of field tests. Both exercise programs were effective in reducing total body fat and visceral adiposity (P<0.05) and improving handgrip strength, sprint time, jumping ability and flexibility (P<0.05), whilst only the combination of HIIT and conventional training improved cardiorespiratory fitness levels (P<0.05). A between-group analysis of changes revealed that HIIT-C resulted in a significantly greater reduction in both abdominal girth and visceral adiposity compared with conventional training (P<0.05). Eight weeks of combined group-based HIIT and conventional training improved various physical fitness parameters and reduced both total and visceral fat levels. This type of training was also superior to conventional exercise training alone in reducing visceral adiposity. Group-based HIIT may be considered a good method for individuals who exercise in gyms and wish to acquire significant fitness benefits in a relatively short period of time.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
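As a concrete contrast between the two averaging routes, the sketch below (Python, with made-up Cardan angles) averages a small sample of orientations both ways: naively per angle parameter, and by averaging the rotation matrices and projecting back onto SO(3), i.e. the Euclidean ("chordal") mean described above.

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Hypothetical sample of segment orientations (Cardan angles, degrees).
    angles = np.array([[10.0, 25.0, 40.0], [12.0, 28.0, 38.0], [8.0, 22.0, 44.0]])
    rots = Rotation.from_euler('xyz', angles, degrees=True)

    # Conventional (flawed) approach: average each angle parameter directly.
    naive_mean = angles.mean(axis=0)

    # Matrix-based Euclidean ("chordal") mean: average the rotation matrices,
    # then project the result back onto SO(3) with an SVD.
    M = rots.as_matrix().mean(axis=0)
    U, _, Vt = np.linalg.svd(M)
    R_mean = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    matrix_mean = Rotation.from_matrix(R_mean).as_euler('xyz', degrees=True)

    # For low-dispersion data the two agree closely; they diverge as the
    # spread grows or the second/third angle parameters get large.
    print(naive_mean, matrix_mean)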
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Kanlaya, Rattiyaporn; Thongboonkerd, Visith
2016-08-01
The conventional method to purify/concentrate dengue virus (DENV) is time-consuming, with a low virus recovery yield. Herein, we applied cellufine sulfate column chromatography to purify/concentrate DENV based on the mimicry between heparan sulfate and the DENV envelope protein. Comparative analysis demonstrated that this new method offered higher purity (as determined by less contamination with bovine serum albumin) and recovery yield (as determined by greater infectivity). Moreover, the overall duration of cellufine sulfate column chromatography to purify/concentrate DENV was approximately 1/20 of that of the conventional method. Therefore, cellufine sulfate column chromatography serves as a simple, rapid, and effective alternative method for DENV purification/concentration. Copyright © 2016 Elsevier B.V. All rights reserved.
Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.
Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz
2017-01-01
Phase-contrast x-ray computed tomography (PCCT) is currently being investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even when examining weakly absorbing specimens. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach, and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome the experimental challenges posed by this method, which would enable mainstream or even clinical application of PCCT.
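For readers unfamiliar with the conventional baseline whose limit is being revised here, the sketch below shows single-pixel phase-stepping retrieval under Poisson noise: the pixel phase is the argument of the first Fourier coefficient of the stepping curve. All numbers (steps, visibility, counts) are illustrative assumptions, not the paper's acquisition settings.

    import numpy as np

    # Conventional phase-stepping retrieval at one detector pixel: sample the
    # interferogram over one grating period, then take the argument of the
    # first Fourier coefficient of the stepping curve.
    steps = 8
    k = np.arange(steps)
    true_phase = 0.7          # rad, the quantity to retrieve
    mean_counts = 100.0       # photons per step; lower it to see the
    visibility = 0.3          # low-count retrieval break down
    expected = mean_counts * (1.0 + visibility * np.cos(2 * np.pi * k / steps + true_phase))
    counts = np.random.default_rng(4).poisson(expected)   # photon noise

    coef = np.sum(counts * np.exp(-2j * np.pi * k / steps))
    print(np.angle(coef))     # close to true_phase; noisier at low counts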
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
Background: When compared with more traditional instructional methods, game-based e-learning (GbEl) promises higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. Objectives: To compare the effectiveness on the learning outcome of a game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. Methods: A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated to training with an educational adventure game (GbEl group) and 69 subjects to conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. Results: The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game when compared to the script-based approach. Conclusions: Game-based e-learning is more effective than a script-based approach for the training of urinalysis in regard to cognitive learning outcome and has a high positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction. PMID:24349257
A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors
Paul, Sarbajit; Chang, Junghwan
2015-01-01
A new method to detect the mover position of a linear motor is proposed in this paper. This method employs a simple cheap Hall Effect sensor-based magnetic sensor unit to detect the mover position of the linear motor. With the movement of the linear motor, Hall Effect sensor modules electrically separated 120° along with the idea of three phase balanced condition (va + vb + vc = 0) are used to produce three phase signals. The amplitude of the sensor output voltage signals are adjusted to unit amplitude to minimize the amplitude errors. With the unit amplitude signals three to two phase transformation is done to reduce the three multiples of harmonic components. The final output thus obtained is converted to position data by the use of arctangent function. The measurement accuracy of the new method is analyzed by experiments and compared with the conventional two phase method. Using the same number of sensor modules as the conventional two phase method, the proposed method gives more accurate position information compared to the conventional system where sensors are separated by 90° electrical angles. PMID:26506348
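The signal chain described above, three Hall signals 120° apart, a three-phase to two-phase (Clarke) transform, then an arctangent, can be sketched in a few lines of Python. The pole pitch and the ideal synthetic signals are assumptions for illustration, not the paper's hardware values.

    import numpy as np

    def position_from_hall(va, vb, vc, pole_pitch):
        """Mover position from three Hall signals 120 electrical degrees apart
        (amplitudes assumed already normalized to unity upstream)."""
        # Clarke (three-phase to two-phase) transform; harmonics at multiples
        # of three are common to all three phases and cancel here.
        alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
        beta = (vb - vc) / np.sqrt(3.0)
        theta = np.arctan2(beta, alpha)              # electrical angle, rad
        return theta / (2.0 * np.pi) * pole_pitch    # position in one pitch

    # Synthetic check with ideal signals at a known electrical angle.
    th = 1.0
    va, vb, vc = np.cos(th), np.cos(th - 2 * np.pi / 3), np.cos(th + 2 * np.pi / 3)
    print(position_from_hall(va, vb, vc, pole_pitch=30.0))  # 30 * 1.0 / (2*pi)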
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study
NASA Technical Reports Server (NTRS)
1988-01-01
A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft are given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but provide useful references for early estimation purposes.
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = .94) compared to the agreement between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = .92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
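The geometric core of such a prism model reduces to elementary pieces like the one below: the volume between a marker triangle and a reference plane is the triangle's projected base area times the mean marker height. This is a sketch of the idea only; the paper's 82-prism construction and marker layout are not reproduced.

    import numpy as np

    def prism_volume(p1, p2, p3):
        """Volume of the vertical prism between triangle (p1, p2, p3) and the
        z = 0 reference plane: xy base area times the mean vertex height."""
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
        base = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        return base * (z1 + z2 + z3) / 3.0

    # Toy prism: unit right-triangle base, marker heights 10, 12 and 11 cm.
    print(prism_volume((0, 0, 10), (1, 0, 12), (0, 1, 11)))  # 0.5 * 11 = 5.5
    # Chest volume per frame = sum over all prisms; breathing volume is its
    # change over time relative to a reference frame.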
Tablet-based cardiac arrest documentation: a pilot study.
Peace, Jack M; Yuen, Trevor C; Borak, Meredith H; Edelson, Dana P
2014-02-01
Conventional paper-based resuscitation transcripts are notoriously inaccurate, often lacking the precision that is necessary for recording a fast-paced resuscitation. The aim of this study was to evaluate whether a tablet computer-based application could improve upon conventional practices for resuscitation documentation. Nurses used either the conventional paper code sheet or a tablet application during simulated resuscitation events. Recorded events were compared to a gold standard record generated from video recordings of the simulations and a CPR-sensing defibrillator/monitor. Events compared included defibrillations, medication deliveries, and other interventions. During the study period, 199 unique interventions were observed in the gold standard record. Of these, 102 occurred during simulations recorded by the tablet application, 78 by the paper code sheet, and 19 during scenarios captured simultaneously by both documentation methods. These occurred over 18 simulated resuscitation scenarios, in which 9 nurses participated. The tablet application had a mean sensitivity of 88.0% for all interventions, compared to 67.9% for the paper code sheet (P=0.001). The median time discrepancy was 3 s for the tablet and 77 s for the paper code sheet when compared to the gold standard (P<0.001). Similar to prior studies, we found that conventional paper-based documentation practices are inaccurate, often misreporting intervention delivery times or missing their delivery entirely. However, our study also demonstrated that a tablet-based documentation method may represent a means to substantially improve resuscitation documentation quality, which could have implications for resuscitation quality improvement and research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding
Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro
2015-01-01
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
Liu, S X; Zou, M S
2018-03-01
The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through the direct numerical integration (DNI) method. An alternative strategy via the fast Fourier transform algorithm is put forward in this work based on the general expression of the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed calculation strategy can significantly reduce the time cost compared with the conventional approach of straightforward numerical integration.
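The efficiency claim has a generic core worth making explicit: a convolution-type integral evaluated point by point costs O(N²), while routing it through the FFT costs O(N log N). The sketch below demonstrates only this generic speed/accuracy equivalence on a made-up symmetric kernel; it does not model the shell's radiation impedance.

    import numpy as np

    N = 2048
    dx = 1.0 / N
    k = np.exp(-np.arange(N) * dx)              # hypothetical symmetric kernel k(|x|)
    s = np.sin(40 * np.pi * np.arange(N) * dx)  # hypothetical source strength

    # Direct "DNI-style" evaluation of y[i] = sum_j k(|i-j|) s[j] dx  -- O(N^2)
    direct = np.array([(k[np.abs(np.arange(N) - i)] * s).sum() for i in range(N)]) * dx

    # FFT evaluation: embed the symmetric Toeplitz kernel in a circulant -- O(N log N)
    c = np.concatenate([k, [0.0], k[-1:0:-1]])  # length-2N circulant first column
    fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(s, 2 * N)).real[:N] * dx

    print(np.max(np.abs(direct - fast)))        # agreement to round-off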
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.
Improved Virtual Planning for Bimaxillary Orthognathic Surgery.
Hatamleh, Muhanad; Turner, Catherine; Bhamrah, Gurprit; Mack, Gavin; Osher, Jonas
2016-09-01
Conventional model surgery planning for bimaxillary orthognathic surgery can be laborious, time-consuming and may contain potential errors; hence three-dimensional (3D) virtual orthognathic planning has been proven to be an efficient, reliable, and cost-effective alternative. In this report, 3D planning is described for a patient presenting with a Class III incisor relationship on a Skeletal III base with pan-facial asymmetry complicated by reverse overjet and anterior open bite. Combined scan data from direct cone-beam computed tomography and an indirect dental scan were used in the planning. Additionally, a new method of establishing optimum intercuspation by scanning dental casts in final occlusion and positioning them in the composite scan model was demonstrated. Furthermore, conventional model surgery planning was carried out following an in-house protocol. Intermediate and final intermaxillary splints were produced following the conventional method and by 3D printing. Three-dimensional planning showed high accuracy, a good treatment outcome, and reduced laboratory time in comparison with the conventional method. Establishing the final dental occlusion on casts and integrating it into the final 3D planning enabled us to achieve the best possible intercuspation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjun; Li, Ruijiang; Na, Yong Hum
2014-12-15
Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation free, frameless, and capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of surface registration based on a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input data for surface registration, patient skin surfaces were created by contouring patient skin from the planning CT and treatment CBCT. Surface registration was performed using the iterative closest point algorithm with a point-to-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviations between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration had an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient positioning with an approach based solely on surface information.
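The point-to-plane registration step named in the Methods has a compact linearized form: for small motions, minimizing the sum of squared normal distances reduces to a 6-parameter least-squares solve. The Python sketch below illustrates one such step on synthetic points; it is not the authors' implementation.

    import numpy as np

    def point_to_plane_step(src, dst, normals):
        """One linearized point-to-plane ICP step: solve for a small rotation
        vector omega and translation t minimizing
        sum(((p + omega x p + t) - q) . n)^2 over corresponding points."""
        A = np.hstack([np.cross(src, normals), normals])   # rows: [p x n, n]
        b = np.einsum('ij,ij->i', dst - src, normals)      # (q - p) . n
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]                                # omega (rad), t

    rng = np.random.default_rng(3)
    src = rng.random((200, 3))                             # "skin" points
    normals = rng.normal(size=(200, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    dst = src + np.array([0.01, -0.02, 0.005])             # known small shift
    omega, t = point_to_plane_step(src, dst, normals)
    print(omega, t)   # omega ~ 0, t recovers the applied shift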
Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric
2006-04-01
Maize is one of the main crops worldwide and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel to conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies the results are expressed in genome percentages (in contrast to seed or grain percentages). Our objective was to assess the accuracy of real-time PCR based assays to accurately quantify the contents of transgenic grains in non-GM fields in comparison with the real cross-fertilization rate as determined by phenotypical analysis. We performed this study in a region where both GM and conventional maize are normally cultivated and used the predominant transgenic maize Mon810 in combination with a conventional maize variety which displays the characteristic of white grains (therefore allowing cross-pollination quantification as percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions.
Yeo, Junyeob; Hong, Sukjoon; Lee, Daehoo; Hotz, Nico; Lee, Ming-Tsang; Grigoropoulos, Costas P.; Ko, Seung Hwan
2012-01-01
Flexible electronics opened a new class of future electronics. The foldable, light and durable nature of flexible electronics allows vast flexibility in applications such as display, energy devices and mobile electronics. Even though conventional electronics fabrication methods are well developed for rigid substrates, direct application or slight modification of conventional processes for flexible electronics fabrication cannot work. The future flexible electronics fabrication requires totally new low-temperature process development optimized for flexible substrate and it should be based on new material too. Here we present a simple approach to developing a flexible electronics fabrication without using conventional vacuum deposition and photolithography. We found that direct metal patterning based on laser-induced local melting of metal nanoparticle ink is a promising low-temperature alternative to vacuum deposition– and photolithography-based conventional metal patterning processes. The “digital” nature of the proposed direct metal patterning process removes the need for expensive photomask and allows easy design modification and short turnaround time. This new process can be extremely useful for current small-volume, large-variety manufacturing paradigms. Besides, simple, scalable, fast and low-temperature processes can lead to cost-effective fabrication methods on a large-area polymer substrate. The developed process was successfully applied to demonstrate high-quality Ag patterning (2.1 µΩ·cm) and high-performance flexible organic field effect transistor arrays. PMID:22900011
PSO-based PID Speed Control of Traveling Wave Ultrasonic Motor under Temperature Disturbance
NASA Astrophysics Data System (ADS)
Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Azmi, Nur Iffah Mohamed; Romlay, Fadhlur Rahman Mohd
2018-03-01
Traveling wave ultrasonic motors (TWUSMs) have time-varying dynamic characteristics. Temperature rise in TWUSMs remains a problem, particularly for sustaining optimum speed performance. In this study, a PID controller is used to control the speed of a TWUSM under temperature disturbance. Prior to the controller design, a linear approximation model relating speed to temperature is developed from experimental data. Two tuning methods are used to determine the PID parameters: conventional Ziegler-Nichols (ZN) and particle swarm optimization (PSO). A comparison of speed control performance between PSO-PID and ZN-PID is presented. Modelling, simulation and experimental work are carried out using the Fukoku-Shinsei USR60 as the chosen TWUSM. The results of the analyses and experimental work reveal that PID tuning using PSO-based optimization has an advantage over the conventional Ziegler-Nichols method.
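A minimal version of PSO-based PID tuning is sketched below: particles are candidate (Kp, Ki, Kd) triples scored by an ITAE cost on a toy first-order plant. The plant constants, gain bounds and swarm settings are illustrative assumptions, not the USR60 model or the paper's parameters.

    import numpy as np

    def itae_cost(gains, dt=0.001, T=1.0, tau=0.05, K=1.0, setpoint=1.0):
        """Simulate a PID loop around a first-order-lag plant and return the
        ITAE (integral of time-weighted absolute error) criterion."""
        kp, ki, kd = gains
        y = integ = err_prev = 0.0
        cost = 0.0
        for n in range(int(T / dt)):
            err = setpoint - y
            integ += err * dt
            u = kp * err + ki * integ + kd * (err - err_prev) / dt
            err_prev = err
            y += dt * (-y + K * u) / tau        # first-order lag plant
            if not np.isfinite(y):              # unstable gains: reject
                return float('inf')
            cost += (n * dt) * abs(err) * dt
        return cost

    rng = np.random.default_rng(0)
    n_part, n_iter = 20, 50
    pos = rng.uniform(0.0, 5.0, (n_part, 3))    # particles = (Kp, Ki, Kd)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_c = np.array([itae_cost(p) for p in pos])
    gbest = pbest[pbest_c.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_part, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 5.0)
        cost = np.array([itae_cost(p) for p in pos])
        better = cost < pbest_c
        pbest[better], pbest_c[better] = pos[better], cost[better]
        gbest = pbest[pbest_c.argmin()].copy()

    print('PSO-tuned gains (Kp, Ki, Kd):', gbest)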
Fast measurement of bacterial susceptibility to antibiotics
NASA Technical Reports Server (NTRS)
Chappelle, E. W.; Picciolo, G. L.; Schrock, C. G.
1977-01-01
Method, based on photoanalysis of adenosine triphosphate using light-emitting reaction with luciferase-luciferin technique, saves time by eliminating isolation period required by conventional methods. Technique is also used to determine presence of infection as well as susceptibilities to several antibiotics.
Lü, Fan; Shao, Li-Ming; Zhang, Hua; Fu, Wen-Ding; Feng, Shi-Jin; Zhan, Liang-Tong; Chen, Yun-Min; He, Pin-Jing
2018-01-01
Bio-stability is a key feature for the utilization and final disposal of biowaste-derived residues, such as aerobic compost or vermicompost of food waste, bio-dried waste, anaerobic digestate or landfilled waste. The present paper reviews conventional methods and advanced techniques used for the assessment of bio-stability. The conventional methods are reclassified into two categories. Advanced techniques, including spectroscopic (fluorescence, ultraviolet-visible, infrared, Raman, nuclear magnetic resonance), thermogravimetric and thermochemolysis analysis, are emphasized for their application in bio-stability assessment in recent years. Their principles, pros and cons are critically discussed. These advanced techniques are found to be convenient in sample preparation and to supply diversified information. However, the viability of these techniques as potential indicators for bio-stability assessment ultimately depends on establishing their relationship with the conventional methods, especially those based on biotic response. Furthermore, some misuses in data interpretation should be noted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yeşilyaprak, Sevgi Sevi; Yıldırım, Meriç Şenduran; Tomruk, Murat; Ertekin, Özge; Algun, Z Candan
2016-01-01
There is limited information on effective balance training techniques, including virtual reality (VR)-based balance exercises, in residential settings, and no studies have been designed to compare the effects of VR-based balance exercises with conventional balance exercises in older adults living in nursing homes in Turkey. The objective of our study was to investigate the effects of VR-based balance exercises on balance and fall risk in comparison to conventional balance exercises in older adults living in nursing homes. A total sample of 18 subjects (65-82 years of age) with a fall history, randomly assigned to either the VR group (Group 1, n = 7) or the conventional exercise group (Group 2, n = 11), completed the exercise training. In both groups, the Berg balance score (BBS), timed up & go duration, and left leg stance and tandem stance duration with eyes closed improved significantly with time (p < 0.05), but changes were similar in both groups (p > 0.05) after training, indicating that neither exercise method was superior. Similar improvements in balance and fall risk were found with VR-based balance training and conventional balance training in older adults living in the nursing home. Both exercise programs may be preferred by health care professionals for fall prevention. Appropriate patient selection is essential.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; then the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
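Stripped of the curvelet and NFFT machinery, the threshold-and-invert step at the heart of such schemes looks like the 1-D sketch below, where a plain FFT on a uniform grid stands in for the NFDCT purely for illustration; the threshold rule is a crude placeholder, not the paper's scale-local factors.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 512
    t = np.arange(n) / n
    signal = np.sin(2 * np.pi * 15 * t) * np.exp(-3 * t)   # synthetic "event"
    noisy = signal + 0.5 * rng.standard_normal(n)

    coef = np.fft.rfft(noisy)                 # transform-domain coefficients
    thresh = 3.0 * np.median(np.abs(coef))    # crude global threshold
    coef[np.abs(coef) < thresh] = 0.0         # keep "effective" coefficients
    denoised = np.fft.irfft(coef, n)          # inverse transform

    print(np.std(noisy - signal), np.std(denoised - signal))  # error shrinks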
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods, those based on Picard iteration, and those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
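The residual-augmentation idea is easy to exercise with an off-the-shelf JFNK solver such as scipy.optimize.newton_krylov: hand it a single residual vector that stacks a nonlinear flow discretization and a nonlinear stream-exchange term. The toy problem below is invented for illustration (all coefficients are made up) and is not MODFLOW's formulation.

    import numpy as np
    from scipy.optimize import newton_krylov

    n = 50
    def residual(h):
        """Stacked residual: nonlinear 1-D flow (transmissivity ~ h, as in
        unconfined flow) plus a nonlinear head-dependent stream exchange."""
        r = np.empty_like(h)
        hm = 0.5 * (h[:-1] + h[1:])             # midpoint "transmissivity"
        flux = hm * np.diff(h)
        r[1:-1] = flux[1:] - flux[:-1]          # interior mass balance
        r[0] = h[0] - 10.0                      # fixed-head boundary
        dh = h[-1] - 8.0                        # stream stage 8.0 at right end
        r[-1] = flux[-1] - 0.05 * np.sign(dh) * abs(dh) ** 1.5
        return r

    h = newton_krylov(residual, np.full(n, 9.0), method='lgmres', f_tol=1e-8)
    print(h[0], h[-1])   # head falls from the fixed boundary toward the stream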
Hybrid charge division multiplexing method for silicon photomultiplier based PET detectors
NASA Astrophysics Data System (ADS)
Park, Haewook; Ko, Guen Bae; Lee, Jae Sung
2017-06-01
Silicon photomultiplier (SiPM) is widely utilized in various positron emission tomography (PET) detectors and systems. However, the individual recording of SiPM output signals is still challenging owing to the high granularity of the SiPM; thus, charge division multiplexing is commonly used in PET detectors. Resistive charge division method is well established for reducing the number of output channels in conventional multi-channel photosensors, but it degrades the timing performance of SiPM-based PET detectors by yielding a large resistor-capacitor (RC) constant. Capacitive charge division method, on the other hand, yields a small RC constant and provides a faster timing response than the resistive method, but it suffers from an output signal undershoot. Therefore, in this study, we propose a hybrid charge division method which can be implemented by cascading the parallel combination of a resistor and a capacitor throughout the multiplexing network. In order to compare the performance of the proposed method with the conventional methods, a 16-channel Hamamatsu SiPM (S11064-050P) was coupled with a 4 × 4 LGSO crystal block (3 × 3 × 20 mm3) and a 9 × 9 LYSO crystal block (1.2 × 1.2 × 10 mm3). In addition, we tested a time-over-threshold (TOT) readout using the digitized position signals to further demonstrate the feasibility of the time-based readout of multiplexed signals based on the proposed method. The results indicated that the proposed method exhibited good energy and timing performance, thus inheriting only the advantages of conventional resistive and capacitive methods. Moreover, the proposed method showed excellent pulse shape uniformity that does not depend on the position of the interacted crystal. Accordingly, we can conclude that the hybrid charge division method is useful for effectively reducing the number of output channels of the SiPM array.
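Downstream of any such multiplexing network, crystal positions are typically recovered with Anger-style weighted sums of the reduced channel set. The snippet below shows that generic decoding step for a four-corner charge-division readout; the corner convention is illustrative, and the hybrid network's pulse shaping is not modeled.

    import numpy as np

    def decode(a, b, c, d):
        """Anger-logic decoding of four corner signals from a charge-division
        network: normalized X/Y position plus total charge (~ energy)."""
        s = a + b + c + d
        x = (b + d - a - c) / s
        y = (a + b - c - d) / s
        return x, y, s

    print(decode(0.30, 0.35, 0.15, 0.20))   # event in the upper-right region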
Liquid-based cytology for primary cervical cancer screening: a multi-centre study
Monsonego, J; Autillo-Touati, A; Bergeron, C; Dachez, R; Liaras, J; Saurel, J; Zerat, L; Chatelain, P; Mottot, C
2001-01-01
The aim of this six-centre, split-sample study was to compare ThinPrep fluid-based cytology to the conventional Papanicolaou smear. Six cytopathology laboratories and 35 gynaecologists participated. 5428 patients met the inclusion criteria (age > 18 years old, intact cervix, informed consent). Each cervical sample was used first to prepare a conventional Pap smear, then the sampling device was rinsed into a PreservCyt vial, and a ThinPrep slide was made. Screening of slide pairs was blinded (n = 5428). All non-negative concordant cases (n = 101), all non-concordant cases (n = 206), and a 5% random sample of concordant negative cases (n = 272) underwent review by one independent pathologist then by the panel of 6 investigators. Initial (blinded) screening results for ThinPrep and conventional smears were correlated. Initial diagnoses were correlated with consensus cytological diagnoses. Differences in disease detection were evaluated using McNemar's test. On initial screening, 29% more ASCUS cases and 39% more low-grade squamous intraepithelial lesions (LSIL) and more severe lesions (LSIL+) were detected on the ThinPrep slides than on the conventional smears (P = 0.001), including 50% more LSIL and 18% more high-grade SIL (HSIL). The ASCUS:SIL ratio was lower for the ThinPrep method (115:132 = 0.87:1) than for the conventional smear method (89:94 = 0.95:1). The same trend was observed for the ASCUS/AGUS:LSIL ratio. Independent and consensus review confirmed 145 LSIL+ diagnoses; of these, 18% more had been detected initially on the ThinPrep slides than on the conventional smears (P = 0.041). The ThinPrep Pap Test is more accurate than the conventional Pap test and has the potential to optimize the effectiveness of primary cervical cancer screening. © 2001 Cancer Research Campaign http://www.bjcancer.com PMID:11161401
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
NASA Astrophysics Data System (ADS)
Park, J.; Yoo, K.
2013-12-01
For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability in a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing a poor correlation with hydrogeological properties. Among the different data mining methods examined, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case Based Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree were the best. According to the subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method in fitting the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for predicting and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
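In sketch form, the DT route replaces DRASTIC's fixed weights with rules induced from data, e.g. with scikit-learn. The seven feature names echo the DRASTIC factors, but the data and the decision rule below are synthetic stand-ins, not the study's site data.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    features = ['depth', 'recharge', 'aquifer', 'soil', 'topography',
                'impact', 'conductivity']                # DRASTIC-like factors
    rng = np.random.default_rng(0)
    X = rng.random((200, len(features)))                 # site attributes
    y = (X[:, 0] < 0.4) & (X[:, 6] > 0.5)                # synthetic TCE "hits"

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=features))     # induced rules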
Practice Makes Perfect: Using a Computer-Based Business Simulation in Entrepreneurship Education
ERIC Educational Resources Information Center
Armer, Gina R. M.
2011-01-01
This article explains the use of a specific computer-based simulation program as a successful experiential learning model and as a way to increase student motivation while augmenting conventional methods of business instruction. This model is based on established adult learning principles.
Students concept understanding of fluid static based on the types of teaching
NASA Astrophysics Data System (ADS)
Rahmawati, I. D.; Suparmi; Sunarno, W.
2018-03-01
This research aims to assess the concept understanding of students taught by guided inquiry-based learning and by conventional learning. The subjects were two classes of high school students, each consisting of 32 students; both classes were homogeneous. The data were collected with a conceptual test in multiple-choice form, with the students giving argumentation for their answers. The data analysis used a qualitative descriptive method. The results showed that the average score of the class taught with guided inquiry-based learning was 78.44, while that of the class taught with conventional learning was 65.16. Based on these data, guided inquiry is an effective learning model for improving students' concept understanding.
The impact of changing dental needs on cost savings from fluoridation.
Campain, A C; Mariño, R J; Wright, F A C; Harrison, D; Bailey, D L; Morgan, M V
2010-03-01
Although community water fluoridation has been one of the cornerstone strategies for the prevention and control of dental caries, questions are still raised regarding its cost-effectiveness. This study assessed the impact of changing dental needs on the cost savings from community water fluoridation in Australia. Net costs were estimated as the programme costs minus the costs of averted caries. Averted costs were estimated as the product of the caries increment in a non-fluoridated community, the effectiveness of fluoridation, and the cost of a carious surface. Modelling considered four age cohorts (6-20, 21-45, 46-65 and 66+ years) and three time periods (1970s, 1980s, and 1990s). The cost of a carious surface was estimated by conventional and complex methods. Real discount rates (4, 7 (base) and 10%) were utilized. With base-case assumptions, the average annual cost savings per person, in Australian dollars at the 2005 level, ranged from $56.41 (1970s) to $17.75 (1990s) (conventional method) and from $249.45 (1970s) to $69.86 (1990s) (complex method). Under worst-case assumptions fluoridation remained cost-effective, with cost savings ranging from $24.15 (1970s) to $3.87 (1990s) (conventional method) and from $107.85 (1970s) to $24.53 (1990s) (complex method). For the 66+ years cohort (1990s) fluoridation did not show a cost saving, but the costs per person were marginal. Community water fluoridation remains a cost-effective preventive measure in Australia.
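The averted-cost identity used above is simple enough to state as arithmetic; the numbers in the sketch below are invented for illustration and are not the paper's Australian estimates.

    # averted_costs = caries_increment * effectiveness * cost_per_surface
    # net_cost      = programme_cost - averted_costs   (negative => saving)
    caries_increment = 0.9     # carious surfaces/person/year, non-fluoridated
    effectiveness = 0.30       # fraction of the increment prevented
    cost_per_surface = 120.0   # dollars to restore one carious surface
    programme_cost = 6.0       # dollars/person/year for fluoridation

    averted = caries_increment * effectiveness * cost_per_surface   # 32.4
    net = programme_cost - averted                                  # -26.4
    print(f'net cost per person: {net:.2f} (negative = saving)')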
Full waveform inversion in the frequency domain using classified time-domain residual wavefields
NASA Astrophysics Data System (ADS)
Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan
2017-04-01
We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain according to the order of their absolute amplitudes. Then, the residual wavefields are separated into several groups in the time domain. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy at the lower frequency bands. However, the amplitude spectrum obtained from our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Therefore, our method helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm is better than the conventional gradient-based waveform inversion in the frequency domain, especially for deeper parts of the velocity model.
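The separation step itself — rank residual samples by absolute amplitude, split them into classes, and Fourier-transform each class — can be sketched directly; the synthetic residual trace and the choice of four groups below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    nt, n_groups = 1000, 4
    residual = rng.standard_normal(nt) * np.exp(-np.arange(nt) / 300.0)

    order = np.argsort(np.abs(residual))[::-1]   # largest amplitudes first
    groups = np.array_split(order, n_groups)     # amplitude classes

    spectra = []
    for idx in groups:
        part = np.zeros(nt)
        part[idx] = residual[idx]                # keep only this class
        spectra.append(np.fft.rfft(part))        # frequency-domain residual

    # By linearity, the grouped spectra sum back to the full residual spectrum.
    print(np.allclose(sum(spectra), np.fft.rfft(residual)))   # True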
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nawrocki, G.J.; Seaver, C.L.; Kowalkowski, J.B.
As controls needs at the Advanced Photon Source matured from an installation phase to an operational phase, the need to monitor the existing conventional facilities control system with the EPICS-based accelerator control system was realized. This existing conventional facilities control network is based on a proprietary system from Johnson Controls called Metasys. Initially read-only monitoring of the Metasys parameters will be provided; however, the ability for possible future expansion to full control is available. This paper describes a method of using commercially available hardware and existing EPICS software as a bridge between the Metasys and EPICS control systems.
Michael D. Erickson; Curt C. Hassler; Chris B. LeDoux
1991-01-01
Continuous time and motion study techniques were used to develop productivity and cost estimators for the skidding component of ground-based logging systems operating on steep terrain using preplanned skid roads. Comparisons of productivity and costs were analyzed for an overland random-access skidding method versus a skidding method utilizing a network of preplanned...
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
USDA-ARS?s Scientific Manuscript database
Conventional multivariate statistical methods have been used for decades to calculate environmental indicators. These methods generally work fine if they are used in a situation where the method can be tailored to the data. But there is some skepticism that the methods might fail in the context of s...
Marin-Oyaga, Victor A; Salavati, Ali; Houshmand, Sina; Pasha, Ahmed Khurshid; Gharavi, Mohammad; Saboury, Babak; Basu, Sandip; Torigian, Drew A; Alavi, Abass
2015-01-01
Treatment of malignant pleural mesothelioma (MPM) remains very challenging. Assessment of response to treatment is necessary for modifying treatment and using new drugs. Global disease assessment (GDA), which applies image processing methods to extract more information from positron emission tomography (PET) images, may provide reliable information. In this study we show the feasibility of this method of semi-quantification in patients with mesothelioma and compare it with the conventional methods. We also present a review of the literature on this topic. Nineteen subjects with histologically proven MPM who had undergone fluorine-18-fluorodeoxyglucose PET/computed tomography (¹⁸F-FDG PET/CT) before and after treatment were included in this study. An adaptive contrast-oriented thresholding algorithm was used for the image analysis and semi-quantification. Metabolic tumor volume (MTV), maximum and mean standardized uptake values (SUVmax, SUVmean) and total lesion glycolysis (TLG) were calculated for each region of interest. The global tumor glycolysis (GTG) was obtained by summing up all TLG values. Treatment response was assessed by the European Organisation for Research and Treatment of Cancer (EORTC) criteria and by the changes in GTG. Agreement between global disease assessment and the conventional method was also determined. In patients with progressive disease based on EORTC criteria, GTG showed an increase of 150.7, but in patients with stable disease or partial response, GTG showed a decrease of 433.1. The SUVmax of patients before treatment was 5.95 (SD: 2.93) and after treatment it increased to 6.38 (SD: 3.19). Overall concordance of the conventional method with the GDA method was 57%. Concordance for progression of disease based on the conventional method was 44%, for stable disease 85% and for partial response 33%. Discordance was 55%, 14% and 66%, respectively. The adaptive contrast-oriented thresholding algorithm is a promising method to quantify whole tumor glycolysis in patients with mesothelioma. We are able to assess the total metabolic lesion volume, lesion glycolysis, SUVmax, tumor SUVmean and GTG for this particular tumor. We were also able to demonstrate the potential use of this technique in the monitoring of treatment response. More studies comparing this technique with conventional and other global disease assessment methods are needed in order to clarify its role in the assessment of treatment response and prognosis of these patients.
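In the standard notation implied by the abstract (a compact restatement, not a formula quoted from the paper), the per-lesion and global quantities combine as:

    \mathrm{TLG}_i = \mathrm{MTV}_i \times \mathrm{SUV}_{\mathrm{mean},i},
    \qquad
    \mathrm{GTG} = \sum_{i=1}^{N_{\text{lesions}}} \mathrm{TLG}_i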
Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-01-01
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to the other robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can recognize even pedestrians that are invisible from their own positions. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server; therefore, it provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
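The fusion step named in this abstract, covariance intersection, combines two estimates whose cross-correlation is unknown. Below is a minimal numpy sketch of CI for two 2-D position estimates; the fixed weight omega and the example numbers are hypothetical assumptions (the paper's state vectors and weight-selection rule are not given here).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse estimates (x1, P1) and (x2, P2) with unknown cross-correlation.

    omega in [0, 1] weights the two information matrices; a fixed
    hypothetical value is used here, though omega is often chosen to
    minimise trace(P) of the fused covariance.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Two robots' position estimates of the same pedestrian (hypothetical values).
x_a, P_a = np.array([2.0, 1.0]), np.diag([0.25, 0.50])
x_b, P_b = np.array([2.2, 0.8]), np.diag([0.40, 0.20])
x_fused, P_fused = covariance_intersection(x_a, P_a, x_b, P_b)
print(x_fused, np.trace(P_fused))
```

Unlike a naive Kalman-style fusion, CI never claims more certainty than either input justifies, which is why it suits decentralized tracking where the same measurements may reach a robot through several paths.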
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
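As a concrete reading of the GLA idea, the constant scaling factor beta0 = f_refined(x0)/f_crude(x0) is replaced by a first-order expansion of the ratio around x0. The sketch below illustrates this under stated assumptions; the model functions and the toy 1-D example are hypothetical, not the paper's beam problem.

```python
import numpy as np

def gla_approx(f_crude, f_refined, grad_crude, grad_refined, x0):
    """Global-local approximation: scale the crude model by a linearly
    varying factor beta(x) = beta0 + g.(x - x0), matched at x0.

    f_* return model responses; grad_* return gradients at x0. All four
    callables are placeholders for the user's crude/refined models.
    """
    fc0, fr0 = f_crude(x0), f_refined(x0)
    beta0 = fr0 / fc0
    # d/dx [f_refined / f_crude] at x0, by the quotient rule:
    g = (grad_refined(x0) * fc0 - fr0 * grad_crude(x0)) / fc0**2
    def approx(x):
        beta = beta0 + g @ (x - x0)   # linearly varying scaling factor
        return beta * f_crude(x)
    return approx

# Toy 1-D example (hypothetical): the scaling factor is exactly linear here,
# so the GLA reproduces the refined model exactly.
f_c = lambda x: 1.0 + x[0]
f_r = lambda x: (1.0 + x[0]) * (1.2 + 0.3 * x[0])
gc = lambda x: np.array([1.0])
gr = lambda x: np.array([1.5 + 0.6 * x[0]])
fhat = gla_approx(f_c, f_r, gc, gr, np.array([0.0]))
print(fhat(np.array([0.5])), f_r(np.array([0.5])))   # both 2.025
```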
Development and In Vitro Bioactivity Profiling of Alternative Sustainable Nanomaterials
Sustainable, environmentally benign nanomaterials (NMs) are being designed as functionality-based alternatives to conventional metal-based NMs in order to minimize potential risks to human health and the environment. Development of rapid methods to evaluate the ...
NASA Astrophysics Data System (ADS)
Tiecher, Tales; Caner, Laurent; Gomes Minella, Jean Paolo; Henrique Ciotti, Lucas; Antônio Bender, Marcos; dos Santos Rheinheimer, Danilo
2014-05-01
Conventional fingerprinting methods based on geochemical composition still require time-consuming and critical preliminary sample preparation. Thus, fingerprinting characteristics that can be measured rapidly and cheaply with minimal sample preparation, such as spectroscopic methods, should be used. The present study aimed to evaluate the sediment source contributions in a rural catchment by using a conventional method based on geochemical composition and an alternative method based on near-infrared spectroscopy. This study was carried out in a rural catchment with an area of 1.19 km2 located in southern Brazil. The sediment sources evaluated were crop fields (n=20), unpaved roads (n=10) and stream channels (n=10). Thirty suspended sediment samples were collected from eight significant storm runoff events between 2009 and 2011. Source and sediment samples were dried at 50°C and sieved at 63 µm. The total concentrations of Ag, As, B, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, K, La, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, Tl, V and Zn were estimated by ICP-OES after microwave-assisted digestion with concentrated HNO3 and HCl. Total organic carbon (TOC) was estimated by wet oxidation with K2Cr2O7 and H2SO4. The near-infrared spectra were scanned from 4000 to 10000 cm-1 at a resolution of 2 cm-1, with 100 co-added scans per spectrum. The steps used in the conventional method were: i) tracer selection based on the Kruskal-Wallis test, ii) selection of the best set of tracers using discriminant analysis, and finally iii) the use of a mixed linear model to calculate the sediment source contributions. The steps used in the alternative method were: i) principal component analysis to reduce the number of variables, ii) discriminant analysis to determine the tracer potential of near-infrared spectroscopy, and finally iii) the use of partial least squares, calibrated on 48 mixtures of the sediment sources in various weight proportions, to calculate the sediment source contributions. Both the conventional and alternative methods were capable of discriminating 100% of the sediment sources. The conventional fingerprinting method provided sediment source contributions of 33±19% from crop fields, 25±13% from unpaved roads and 42±19% from stream channels. The contributions obtained by the alternative fingerprinting method using near-infrared spectroscopy were 71±22% from crop fields, 21±12% from unpaved roads and 14±19% from stream channels. No correlation was observed between the source contributions assessed by the two methods. Notwithstanding, the average contribution of the unpaved roads was similar for both methods. The largest difference between the two methods, in the average contributions of crop fields and stream channels, was due to the similar organic matter content of these two sediment sources, which hampers their discrimination from the near-infrared spectra, where many of the bands are highly correlated with the TOC levels. Efforts should be made to combine the geochemical composition and near-infrared spectroscopy information into a single estimate of the sediment source contributions.
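The final apportionment step in both fingerprinting approaches is a constrained mixing model: find non-negative source proportions summing to one that best reproduce the sediment's tracer signature. Below is a minimal sketch of such an unmixing step using non-negative least squares with a sum-to-one penalty; the tracer values are hypothetical stand-ins, and the paper's mixed linear model and PLS calibration are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(source_means, sediment):
    """Estimate source proportions w >= 0 with sum(w) = 1 such that
    source_means.T @ w ~= sediment (tracer concentrations).

    A large penalty row enforces the sum-to-one constraint approximately.
    """
    A = source_means.T                       # (n_tracers, n_sources)
    penalty = 1e4
    A_aug = np.vstack([A, penalty * np.ones(A.shape[1])])
    b_aug = np.append(sediment, penalty)
    w, _ = nnls(A_aug, b_aug)
    return w / w.sum()

# Rows: crop fields, unpaved roads, stream channels; columns: three tracers
# (hypothetical mean concentrations).
sources = np.array([[10.0, 5.0, 1.0],
                    [ 4.0, 9.0, 2.0],
                    [ 2.0, 3.0, 8.0]])
mix = 0.5 * sources[0] + 0.2 * sources[1] + 0.3 * sources[2]
print(unmix(sources, mix))   # ~ [0.5, 0.2, 0.3]
```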
Ngamwonglumlert, Luxsika; Devahastin, Sakamon; Chiewchan, Naphaporn
2017-10-13
Natural colorants from plant-based materials have gained increasing popularity due to the health consciousness of consumers. Among the many steps involved in the production of natural colorants, pigment extraction is one of the most important. Soxhlet extraction, maceration, and hydrodistillation are conventional methods that have been widely used in industry and the laboratory for this purpose. Recently, various non-conventional methods, such as supercritical fluid extraction, pressurized liquid extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed-electric-field extraction, and enzyme-assisted extraction, have emerged as alternatives to conventional methods due to their advantages in terms of lower solvent consumption, shorter extraction times, and greater environmental friendliness. Prior to the extraction step, pretreatment of the plant materials to enhance the stability of the natural pigments is another important step that must be handled carefully. In this paper, a comprehensive review of appropriate pretreatment and extraction methods for chlorophylls, carotenoids, betalains, and anthocyanins, the major classes of plant pigments, is provided, using pigment stability and extraction yield as assessment criteria.
NASA Astrophysics Data System (ADS)
Miyata, Kazunori; Nakajima, Masayuki
1995-04-01
A method is given for synthesizing a texture by using the interface of a conventional drawing tool. The majority of conventional texture generation methods are based on the procedural approach, and can generate a variety of textures that are adequate for generating a realistic image. But it is hard for a user to imagine what kind of texture will be generated simply by looking at its parameters. Furthermore, it is difficult to design a new texture freely without a knowledge of all the procedures for texture generation. Our method offers a solution to these problems, and has the following four merits: First, a variety of textures can be obtained by combining a set of feature lines and attribute functions. Second, data definitions are flexible. Third, the user can preview a texture together with its feature lines. Fourth, people can design their own textures interactively and freely by using the interface of a conventional drawing tool. For users who want to build this texture generation method into their own programs, we also give the language specifications for generating a texture. This method can interactively provide a variety of textures, and can also be used for typographic design.
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
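A minimal sketch of the idea for the paper's example case (linear regression with a common additive Type B component) follows. The extended likelihood multiplies the least-squares term by a Gaussian PDF for the nuisance correction delta, which is then profiled out; the data values, sigma, and sigma_B are hypothetical, and the grid search is only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: y_i = a + b*x_i with independent (Type A) errors sigma,
# plus a common additive correction delta with Type B uncertainty sigma_B.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sigma, sigma_B = 0.2, 0.3

def neg2_log_extended_likelihood(a, b, delta):
    """-2 ln L: the usual least-squares term plus a Gaussian penalty for
    the nuisance correction delta, as in the extended likelihood."""
    resid = y - (a + b * x) - delta
    return np.sum((resid / sigma) ** 2) + (delta / sigma_B) ** 2

def profiled(a, b):
    """Profile likelihood: minimise -2 ln L over the nuisance parameter."""
    return minimize_scalar(lambda d: neg2_log_extended_likelihood(a, b, d)).fun

# Coarse grid search over (a, b), purely for illustration.
grid = [(a, b) for a in np.linspace(-1.0, 1.0, 41)
               for b in np.linspace(1.0, 3.0, 41)]
a_hat, b_hat = min(grid, key=lambda ab: profiled(*ab))
print(a_hat, b_hat)
```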
Kato, Takehito; Oinuma, Chihiro; Otsuka, Munechika; Hagiwara, Naoki
2017-01-10
The photoactive layer of a typical organic thin-film bulk-heterojunction (BHJ) solar cell commonly uses fullerene derivatives as the electron-accepting material. However, fullerene derivatives are air-sensitive; therefore, an air-stable material is needed as an alternative. In the present study, we propose and describe the properties of Ti-alkoxide as an alternative electron-accepting material to fullerene derivatives for creating highly air-stable BHJ solar cells. It is well known that controlling the morphology of the photoactive layer, which is constructed with fullerene derivatives as the electron acceptor, is important for obtaining a high overall efficiency through the solvent method. The conventional solvent method is useful for high-solubility materials, such as fullerene derivatives. However, for Ti-alkoxides, the conventional solvent method is insufficient, because they dissolve only in specific solvents. Here, we demonstrate a new approach to morphology control that uses the molecular bulkiness of Ti-alkoxides without the conventional solvent method. That is, this method is one approach to obtaining highly efficient, air-stable, organic-inorganic bulk-heterojunction solar cells.
High-Accuracy Ultrasound Contrast Agent Detection Method for Diagnostic Ultrasound Imaging Systems.
Ito, Koichi; Noro, Kazumasa; Yanagisawa, Yukari; Sakamoto, Maya; Mori, Shiro; Shiga, Kiyoto; Kodama, Tetsuya; Aoki, Takafumi
2015-12-01
An accurate method for detecting contrast agents using diagnostic ultrasound imaging systems is proposed. Contrast agents, such as microbubbles, passing through a blood vessel during ultrasound imaging are detected as blinking signals along the temporal axis, because their intensity values fluctuate continuously. Ultrasound contrast agents are detected by evaluating the intensity variation of a pixel along the temporal axis. Conventional methods are based on simple subtraction of ultrasound images to detect ultrasound contrast agents; even if the subject moves only slightly, such a detection method introduces significant error. In contrast, the proposed technique employs spatiotemporal analysis of the pixel intensity variation over several frames. Experiments visualizing blood vessels in the mouse tail showed that the proposed method performs well compared with conventional approaches. We also report that the new technique is useful for observing temporal changes in microvessel density in subiliac lymph nodes containing tumors. The results are compared with those of contrast-enhanced computed tomography. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
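The core of the proposed idea, thresholding per-pixel intensity variation over several frames rather than subtracting two frames, can be sketched in a few lines of numpy. The window length, the threshold rule, and the toy frame stack below are hypothetical choices, not the paper's actual spatiotemporal analysis.

```python
import numpy as np

def detect_blinking(frames, win=8, k=3.0):
    """Flag pixels whose intensity varies strongly over a temporal window.

    frames: (T, H, W) array of consecutive ultrasound frames. Conventional
    subtraction uses only two frames; thresholding the variance over `win`
    frames is more robust to small subject motion. win and k are
    hypothetical parameters.
    """
    var_map = frames[:win].var(axis=0)           # per-pixel temporal variance
    thresh = var_map.mean() + k * var_map.std()  # simple global threshold
    return var_map > thresh

# Toy stack: static background plus one "blinking" pixel.
rng = np.random.default_rng(0)
stack = rng.normal(100.0, 1.0, size=(8, 64, 64))
stack[:, 32, 32] += rng.normal(0.0, 20.0, size=8)  # microbubble-like signal
print(detect_blinking(stack)[32, 32])  # True
```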
Clinical applications of cell-based approaches in alveolar bone augmentation: a systematic review.
Shanbhag, Siddharth; Shanbhag, Vivek
2015-01-01
Cell-based approaches, utilizing adult mesenchymal stem cells (MSCs), are reported to overcome the limitations of conventional bone augmentation procedures. The study aims to systematically review the available evidence on the characteristics and clinical effectiveness of cell-based ridge augmentation, socket preservation, and sinus-floor augmentation, compared to current evidence-based methods in human adult patients. MEDLINE, EMBASE, and CENTRAL databases were searched for related literature. Both observational and experimental studies reporting outcomes of "tissue engineered" or "cell-based" augmentation in ≥5 adult patients alone, or in comparison with non-cell-based (conventional) augmentation methods, were eligible for inclusion. Primary outcome was histomorphometric analysis of new bone formation. Effectiveness of cell-based augmentation was evaluated based on outcomes of controlled studies. Twenty-seven eligible studies were identified. Of these, 15 included a control group (8 randomized controlled trials [RCTs]), and were judged to be at a moderate-to-high risk of bias. Most studies reported the combined use of cultured autologous MSCs with an osteoconductive bone substitute (BS) scaffold. Iliac bone marrow and mandibular periosteum were frequently reported sources of MSCs. In vitro culture of MSCs took between 12 days and 1.5 months. A range of autogenous, allogeneic, xenogeneic, and alloplastic scaffolds was identified. Bovine bone mineral scaffold was frequently reported with favorable outcomes, while polylactic-polyglycolic acid copolymer (PLGA) scaffold resulted in graft failure in three studies. The combination of MSCs and BS resulted in outcomes similar to autogenous bone (AB) and BS. Three RCTs and one controlled trial reported significantly greater bone formation in cell-based than conventionally grafted sites after 3 to 8 months. Based on limited controlled evidence at a moderate-to-high risk of bias, cell-based approaches are comparable, if not superior, to current evidence-based bone grafting methods, with a significant advantage of avoiding AB harvesting. Future clinical trials should additionally evaluate patient-based outcomes and the time-/cost-effectiveness of these approaches. © 2013 Wiley Periodicals, Inc.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies, since they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more promptly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
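For readers unfamiliar with the conventional criteria being compared against, the sketch below fits AR(p) models by least squares and evaluates AIC, BIC, and one common form of FPE, then forms Akaike weights as a simple stand-in for a blind mixture of orders; the evolutionary and ensemble mixing schemes of the paper are not reproduced, and the synthetic AR(2) signal is a hypothetical stand-in for an EEG segment.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: predict x[t] from x[t-1..t-p]."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ a) ** 2)     # residual variance
    return a, sigma2

def order_criteria(x, p):
    """AIC, BIC and one common form of FPE for an AR(p) fit."""
    _, s2 = fit_ar(x, p)
    n = len(x) - p
    aic = n * np.log(s2) + 2 * p
    bic = n * np.log(s2) + p * np.log(n)
    fpe = s2 * (n + p + 1) / (n - p - 1)
    return aic, bic, fpe

# Synthetic AR(2) signal.
rng = np.random.default_rng(1)
x = rng.normal(size=512)
for t in range(2, len(x)):
    x[t] += 0.75 * x[t - 1] - 0.5 * x[t - 2]

orders = list(range(1, 11))
aics = np.array([order_criteria(x, p)[0] for p in orders])
# Akaike weights: a simple, blind way to mix several candidate orders.
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()
print({p: round(float(wi), 3) for p, wi in zip(orders, w)})
```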
NASA Astrophysics Data System (ADS)
Shao, Xupeng
2017-04-01
Glutenite bodies are widely developed in the northern Minfeng zone of the Dongying Sag, and their litho-electric relationship is not clear. In addition, because the conventional sequence stratigraphic research method has the drawback of involving too many subjective human factors, it has limited the deepening of regional sequence stratigraphic research. The wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data have the advantage of dividing sequence stratigraphy quantitatively compared with the conventional methods. On the basis of the conventional sequence research method, this paper used the above techniques to divide the fourth-order sequences of the upper Es4 in the northern Minfeng zone of the Dongying Sag. The research shows that the wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data are essentially consistent, both dividing sequence stratigraphy quantitatively in the frequency domain. The wavelet transform technique has a high resolution and is suitable for areas with wells; the seismic time-frequency analysis technique has wide applicability but a low resolution, so the two techniques should be combined. The upper Es4 in the northern Minfeng zone of the Dongying Sag is a complete third-order sequence, which can be further subdivided into 5 fourth-order sequences with fining-upward depositional characteristics. Key words: Dongying Sag, northern Minfeng zone, wavelet transform technique, time-frequency analysis technique, the upper Es4, sequence stratigraphy
Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.
Reed, George H; Poyner, Russell R
2015-01-01
An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
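A minimal numpy sketch of the basic operation, dividing the spectrum's Fourier transform by that of an assumed Gaussian broadening function and apodizing to limit noise amplification, is given below; the broadening width, cutoff, and synthetic two-line spectrum are hypothetical choices, not the chapter's actual implementation.

```python
import numpy as np

def fourier_resolution_enhance(spectrum, width, cutoff=0.3):
    """Sharpen a 1-D spectrum by dividing its FFT by the transform of an
    assumed Gaussian broadening function, with a hard low-pass apodization
    to keep the high-frequency noise blow-up under control.

    width: assumed Gaussian broadening (in points); cutoff: fraction of the
    Nyquist band retained. Both are hypothetical and must be tuned.
    """
    n = len(spectrum)
    f = np.fft.rfftfreq(n)                        # cycles per point
    S = np.fft.rfft(spectrum)
    G = np.exp(-2.0 * (np.pi * width * f) ** 2)   # FT of the Gaussian
    window = f < cutoff * f.max()                 # crude apodization window
    S_dec = np.where(window, S / np.maximum(G, 1e-12), 0.0)
    return np.fft.irfft(S_dec, n)

# Two overlapped Gaussian lines, then enhancement resolves them.
xx = np.arange(512.0)
spec = np.exp(-((xx - 250) / 12) ** 2) + 0.8 * np.exp(-((xx - 280) / 12) ** 2)
sharp = fourier_resolution_enhance(spec, width=8.0)
```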
Continuous Blood Pressure Monitoring in Daily Life
NASA Astrophysics Data System (ADS)
Lopez, Guillaume; Shuzo, Masaki; Ushida, Hiroyuki; Hidaka, Keita; Yanagimoto, Shintaro; Imai, Yasushi; Kosaka, Akio; Delaunay, Jean-Jacques; Yamada, Ichiro
Continuous monitoring of blood pressure in daily life could improve early detection of cardiovascular disorders, as well as promoting healthcare. Conventional ambulatory blood pressure monitoring (ABPM) equipment can measure blood pressure at regular intervals for 24 hours, but is limited by a long measuring time, a low sampling rate, and a constrained measuring posture. In this paper, we demonstrate a new method for continuous real-time measurement of blood pressure during daily activities. Our method is based on blood pressure estimation from the pulse wave velocity (PWV), whose formula we improved to take into account changes in the inner diameter of blood vessels. Blood pressure estimation results using our new method showed greater precision during exercise and better accuracy than the conventional PWV method.
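The conventional chain of computation can be sketched simply: a pulse transit time over a known path length gives the PWV, which is mapped to blood pressure through a subject-specific calibration. The sketch below uses a plain linear calibration with hypothetical constants; the paper's improved formula, which also models vessel inner-diameter changes, is not reproduced here.

```python
def pulse_wave_velocity(distance_m, ptt_s):
    """PWV = path length / pulse transit time (e.g., ECG R-wave to
    peripheral pulse arrival)."""
    return distance_m / ptt_s

def estimate_sbp(pwv, a, b):
    """Conventional linear calibration SBP ~ a*PWV + b. The constants a
    and b are subject-specific hypothetical values obtained from a cuff
    calibration, not the paper's improved model."""
    return a * pwv + b

ptt = 0.180                              # s, hypothetical transit time
pwv = pulse_wave_velocity(0.95, ptt)     # ~5.3 m/s
print(round(pwv, 2), round(estimate_sbp(pwv, a=18.0, b=25.0), 1))
```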
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
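A minimal sketch of the overall pipeline follows: a prefiltered generalised cross-correlation yields the time difference, which is converted to a leak position from the sensor spacing and the wave speed. The regularised PHAT-style weighting here is only a stand-in for the paper's modified ML prefilter, with eps playing the role of the regularisation factor; the pipe parameters are hypothetical.

```python
import numpy as np

def gcc_delay(s1, s2, fs, eps=1e-2):
    """Time-difference estimate tau = t1 - t2 via generalised
    cross-correlation with a regularised magnitude-whitening prefilter."""
    n = len(s1) + len(s2)
    S1, S2 = np.fft.rfft(s1, n), np.fft.rfft(s2, n)
    C = S1 * np.conj(S2)
    W = 1.0 / (np.abs(C) + eps * np.abs(C).max())   # regularised prefilter
    cc = np.fft.irfft(W * C, n)
    k = int(np.argmax(cc))
    tau = k - n if k > n // 2 else k                # unwrap negative lags
    return tau / fs

def leak_distance_from_sensor1(tau, L, c):
    """With tau = t1 - t2, sensor spacing L (m) and wave speed c (m/s),
    the leak lies at d1 = (L + c*tau) / 2 from sensor 1."""
    return 0.5 * (L + c * tau)

# Synthetic check: leak 30 m from sensor 1 on a 100 m pipe, c = 1200 m/s.
fs, c, L, d1 = 10_000, 1200.0, 100.0, 30.0
rng = np.random.default_rng(2)
noise = rng.normal(size=4096)
s1 = np.roll(noise, int(round(d1 / c * fs)))
s2 = np.roll(noise, int(round((L - d1) / c * fs)))
tau = gcc_delay(s1, s2, fs)
print(leak_distance_from_sensor1(tau, L, c))   # ~30.0
```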
Li, Hui; Li, Wei; Li, Shuhua; Ma, Jing
2008-06-12
Molecular fragmentation quantum mechanics (QM) calculations have been combined with molecular mechanics (MM) to construct the fragmentation QM/MM method for simulations of dilute solutions of macromolecules. We adopt the electrostatics embedding QM/MM model, where the low-cost generalized energy-based fragmentation calculations are employed for the QM part. Conformation energy calculations, geometry optimizations, and Born-Oppenheimer molecular dynamics simulations of poly(ethylene oxide), PEO(n) (n = 6-20), and polyethylene, PE(n) (n = 9-30), in aqueous solution have been performed within the framework of both the fragmentation and conventional QM/MM methods. The intermolecular hydrogen bonding and chain configurations obtained from the fragmentation QM/MM simulations are consistent with the conventional QM/MM method. The length dependence of chain conformations and dynamics of PEO and PE oligomers in aqueous solutions is also investigated through the fragmentation QM/MM molecular dynamics simulations.
Denoising time-domain induced polarisation data using wavelet techniques
NASA Astrophysics Data System (ADS)
Deo, Ravin N.; Cull, James P.
2016-05-01
Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is to apply analogue or digital low-pass filtering followed by stacking and rectification; however, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques to the processing of raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that the distortions arising from conventional filtering can be largely avoided with the use of wavelet-based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.
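A minimal sketch of the kind of wavelet denoising discussed, soft thresholding of detail coefficients with a MAD noise estimate and the universal threshold, is shown below using PyWavelets; the wavelet, decomposition level, and synthetic decay transient are hypothetical choices, not the study's acquisition settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising (universal threshold). Unlike a
    low-pass filter, this suppresses noise without smearing the sharp
    early-time decay of an IP transient."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic IP-like decay transient with additive noise.
t = np.linspace(0.0, 2.0, 2048)
clean = np.exp(-1.5 * t)
noisy = clean + np.random.default_rng(3).normal(0.0, 0.05, t.size)
denoised = wavelet_denoise(noisy)
```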
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in the time-frequency resolution at the expense of a higher computational complexity. This work describes an approach which implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity associated with GAs and their parallel characteristics. This will allow the implementation of higher-order filters, increasing the spectrum resolution and opening greater scope for using more complex methods.
Fostering Sustainable Behavior: An Introduction to Community-Based Social Marketing.
ERIC Educational Resources Information Center
McKenzie-Mohr, Doug; Smith, William
This book discusses incorporating community-based social marketing techniques into programs. The first chapter explains why programs that rely heavily on conventional methods to promote behavior change are often ineffective, and introduces community-based social marketing as an attractive alternative for the delivery of programs. Chapter 2 describes…
Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks
NASA Astrophysics Data System (ADS)
Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li
2016-06-01
Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter α_k, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances the generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.
Leap-frog-based BPM (LF-BPM) method for solving nanophotonic structures
NASA Astrophysics Data System (ADS)
Ayoub, Ahmad B.; Swillam, Mohamed A.
2018-02-01
In this paper, we propose an efficient approach to solving the BPM equation. By splitting the complex field into real and imaginary parts, the method is shown to be at least 30% faster than the conventional BPM. The method was tested on several optical components to verify its accuracy.
Wei, Ting-Yen; Yen, Tzung-Hai; Cheng, Chao-Min
2018-01-01
Acute pesticide intoxication is a common method of suicide globally. This article reviews current diagnostic methods and makes suggestions for future development. Paraquat intoxication, in particular, is characterized by multi-organ failure, causing substantial mortality and morbidity. Early diagnosis may save the life of a paraquat intoxication patient. Conventional paraquat intoxication diagnostic methods, such as symptom review and the urine sodium dithionite assay, are time-consuming and impractical in the resource-scarce areas where most intoxication cases occur. Several experimental and clinical studies have shown the potential of portable Surface Enhanced Raman Scattering (SERS), paper-based devices, and machine learning for paraquat intoxication diagnosis. Portable SERS and new SERS substrates maintain the sensitivity of SERS while being less costly and more convenient than conventional SERS. Paper-based devices provide the advantages of price and portability. Machine learning algorithms can be implemented as a mobile phone application and facilitate diagnosis in resource-limited areas. Although these methods have not yet met all the features of an ideal diagnostic method, their combination and further development offer much promise.
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the Massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, which is based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP, and conventional MMP estimators.
Development of Mycoplasma synoviae (MS) core genome multilocus sequence typing (cgMLST) scheme.
Ghanem, Mostafa; El-Gazzar, Mohamed
2018-05-01
Mycoplasma synoviae (MS) is a poultry pathogen with reported increased prevalence and virulence in recent years. MS strain identification is essential for prevention and control efforts and for epidemiological outbreak investigations. Multiple multilocus sequence typing schemes have been developed for MS, yet the resolution of these schemes can be too limited for outbreak investigation. The cost of whole genome sequencing has become close to that of sequencing the seven MLST targets; however, there is no standardized method for typing MS strains based on whole genome sequences. In this paper, we propose a core genome multilocus sequence typing (cgMLST) scheme as a standardized and reproducible method for typing MS based on whole genome sequences. A diverse set of 25 MS whole genome sequences was used to identify 302 core genome genes as cgMLST targets (35.5% of the MS genome), and 44 whole genome sequences of MS isolates from six countries on four continents were typed using this scheme. cgMLST-based phylogenetic trees displayed a high degree of agreement with core genome SNP-based analysis and available epidemiological information. cgMLST allowed evaluation of two conventional MLST schemes for MS. The high discriminatory power of cgMLST allowed differentiation between samples of the same conventional MLST type. cgMLST represents a standardized, accurate, highly discriminatory, and reproducible method for differentiating between MS isolates. Like conventional MLST, it provides a stable and expandable nomenclature, allowing the typing results to be compared and shared between different laboratories worldwide. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
A Direct Cell Quenching Method for Cell-Culture Based Metabolomics
A crucial step in metabolomic analysis of cellular extracts is the cell quenching process. The conventional method first uses trypsin to detach cells from their growth surface. This inevitably changes the profile of cellular metabolites since the detachment of cells from the extr...
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose the novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented for three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to that of the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
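A compact numpy sketch of the HOSVD itself, mode-n unfoldings, per-mode SVDs, and the core tensor whose energy concentration provides the sparsity exploited by CS, is given below; the toy tensor is hypothetical, and the paper's CS reconstruction loop is not reproduced.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-order SVD: factor matrices from each unfolding, plus the core
    S = T x_0 U0^T x_1 U1^T ... whose coefficients concentrate the energy."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

# Toy dynamic "image": x, y, time.
rng = np.random.default_rng(4)
T = rng.normal(size=(16, 16, 10))
core, factors = hosvd(T)

# Reconstruction check: apply the factor matrices back.
rec = core
for m, U in enumerate(factors):
    rec = np.moveaxis(np.tensordot(U, np.moveaxis(rec, m, 0), axes=1), 0, m)
print(np.allclose(rec, T))   # True
```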
Theory of viscous transonic flow over airfoils at high Reynolds number
NASA Technical Reports Server (NTRS)
Melnik, R. E.; Chow, R.; Mead, H. R.
1977-01-01
This paper considers viscous flows with unseparated turbulent boundary layers over two-dimensional airfoils at transonic speeds. Conventional theoretical methods are based on boundary layer formulations which do not account for the effect of the curved wake and static pressure variations across the boundary layer in the trailing edge region. In this investigation an extended viscous theory is developed that accounts for both effects. The theory is based on a rational analysis of the strong turbulent interaction at airfoil trailing edges. The method of matched asymptotic expansions is employed to develop formal series solutions of the full Reynolds equations in the limit of Reynolds numbers tending to infinity. Procedures are developed for combining the local trailing edge solution with numerical methods for solving the full potential flow and boundary layer equations. Theoretical results indicate that conventional boundary layer methods account for only about 50% of the viscous effect on lift, the remaining contribution arising from wake curvature and normal pressure gradient effects.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that takes capture settings into account was proposed. The method for calculating the colorimetric values of a measured image comprises five main steps, including conversion of the RGB values to their equivalents under the training settings through factors derived from an imaging-system model, which builds the bridge between different settings, and scaling factors applied in preparation for the transformation mapping, which avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as the conventional method achieves for a particular lighting condition.
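The general shape of such a characterization can be sketched as follows: a polynomial expansion of RGB is mapped to XYZ by least squares on training patches, and images captured under other settings are first rescaled to equivalent training-setting values. The exposure factor below is a hypothetical stand-in for the paper's imaging-system-model factors, and the training data are synthetic.

```python
import numpy as np

def poly_expand(rgb):
    """Second-order polynomial expansion of RGB values, shape (N, 3)."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

def fit_characterization(rgb_train, xyz_train):
    """Least-squares polynomial mapping RGB -> XYZ (training settings)."""
    M, *_ = np.linalg.lstsq(poly_expand(rgb_train), xyz_train, rcond=None)
    return M

def predict_xyz(rgb, M, exposure_factor=1.0):
    """Scale RGB to equivalent training-setting values first (the 'bridge'
    between capture settings), then apply the polynomial mapping.
    exposure_factor is a hypothetical stand-in for camera-model-derived
    factors built from shutter-time, aperture, and ISO ratios."""
    return poly_expand(rgb * exposure_factor) @ M

# Toy training data (hypothetical): 24 patches with known XYZ values.
rng = np.random.default_rng(7)
rgb24 = rng.random((24, 3))
true_map = rng.random((10, 3))
xyz24 = poly_expand(rgb24) @ true_map
M = fit_characterization(rgb24, xyz24)
print(np.allclose(predict_xyz(rgb24, M), xyz24))   # True
```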
Tavakoli-Ardakani, M; Neman, B; Mehdizadeh, M; Hajifathali, A; Salamzadeh, J; Tabarraee, M
2013-07-01
Malnutrition in patients undergoing hematopoietic SCT is a known risk factor for adverse effects and is directly or indirectly responsible for excess mortality and morbidity. We designed the present study to evaluate the effects of individualized parenteral nutrition (PN) and compare this method to conventional PN. Individualized PN based on the Harris-Benedict equation was administered to 30 patients after hematopoietic SCT and was compared with an age-, gender- and disease-matched group of patients who underwent hematopoietic SCT with conventional PN. The two groups were compared on clinical, hematological, and nutritional outcomes. Comparisons of the duration of hospital stay (P<0.0001), infection (P=0.01), time to platelet engraftment (P=0.02), units of packed cell transfusion (P=0.006), and decrease in body weight (P=0.004) showed significant differences between the two groups. In conclusion, the use of individualized PN seems more beneficial than conventional PN.
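For reference, the original Harris-Benedict basal energy expenditure equations cited as the basis for the individualized regimens can be written directly; the example patient is hypothetical, and in practice the BEE would be multiplied by activity and stress factors chosen by the clinical team.

```python
def harris_benedict_bee(weight_kg, height_cm, age_yr, male):
    """Basal energy expenditure (kcal/day) from the original
    Harris-Benedict equations."""
    if male:
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# Hypothetical example patient: 70 kg, 175 cm, 40-year-old man.
bee = harris_benedict_bee(70, 175, 40, male=True)
print(round(bee))   # ~1634 kcal/day before activity/stress factors
```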
Baksi, B Güniz; Ermis, R Banu
2007-10-01
To test the efficacy of conventional radiometry versus indirect digital image analysis in the assessment of the relative radiopacity of dental cements used as liners or bases, compared to human enamel and dentin. Disks of 15 different dental cements, 5 mm in diameter and 2 mm thick, were exposed to radiation together with 2-mm-thick disks of enamel and dentin and an aluminum step wedge. Density was evaluated by digital transmission densitometry and with the histogram function of an image analysis program following digitization of the radiographs with a flatbed scanner. A higher number of dental cements were discriminated from both dentin and enamel with the conventional radiographic densitometer. All the cements examined, except Ionoseal (Voco) and Ionobond (Voco), were more radiopaque than dentin. With both methods, Chelon-Silver (3M ESPE) had the highest radiopacity and glass-ionomer cements the lowest. The radiodensity of dental cements can be differentiated with high probability with the conventional radiometric method.
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
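A minimal sketch of the central mechanism, computing per-dimension information gain and letting it modulate per-gene mutation rates so that informative dimensions are perturbed less, is shown below; the linear rate map, its bounds, and the toy document-term matrix are hypothetical assumptions, since the paper specifies only that IG changes the mutation probability dynamically.

```python
import numpy as np

def information_gain(X, y):
    """IG of each binary feature (term present/absent) w.r.t. class labels."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    H = entropy(y)
    ig = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        present = X[:, j] > 0
        ig[j] = H
        for part in (y[present], y[~present]):
            if part.size:
                ig[j] -= part.size / y.size * entropy(part)
    return ig

def mutation_probabilities(ig, p_lo=0.001, p_hi=0.05):
    """Map IG to per-gene mutation rates: informative dimensions mutate
    less. The linear map and bounds are hypothetical choices."""
    scaled = (ig - ig.min()) / (np.ptp(ig) + 1e-12)
    return p_hi - scaled * (p_hi - p_lo)

rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(200, 30))     # toy document-term matrix
y = X[:, 0].copy()                         # labels mostly follow dimension 0
y[rng.random(200) < 0.1] ^= 1
p_mut = mutation_probabilities(information_gain(X, y))
chromosome = rng.integers(0, 2, size=30)   # 1 = dimension selected
flip = rng.random(30) < p_mut              # per-gene Bernoulli mutation
chromosome[flip] ^= 1
```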
The Measurement of Term Importance in Automatic Indexing.
ERIC Educational Resources Information Center
Salton, G.; And Others
1981-01-01
Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
Cat-eye effect target recognition with single-pixel detectors
NASA Astrophysics Data System (ADS)
Jian, Weijian; Li, Li; Zhang, Xiaoyue
2015-12-01
A prototype of cat-eye effect target recognition with single-pixel detectors is proposed. Within the framework of compressive sensing, it is possible to recognize cat-eye effect targets by projecting a series of known random patterns and measuring the backscattered light with three single-pixel detectors in different locations. The prototype requires only simpler, less expensive detectors and extends well beyond the visible spectrum. Simulations were performed to evaluate the feasibility of the proposed prototype. We compared our results to those obtained from conventional cat-eye effect target recognition methods using an area-array sensor. The experimental results show that this method is feasible and superior to the conventional method in dynamic and complicated backgrounds.
Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min
2013-09-01
The hippocampus has been known to be an important structure serving as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of the initial seeds was further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the graph-cuts result. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). As for segmentation accuracy, measured in terms of the false-positive and false-negative ratios, the proposed method (precision=0.76±0.04, recall=0.86±0.05) produced better ratios than the conventional methods (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
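The evaluation metrics quoted in this abstract are standard overlap measures between the automated mask and a manual reference; a short sketch of their computation on boolean masks follows, with a toy 1-D example standing in for the 3-D hippocampal volumes.

```python
import numpy as np

def overlap_metrics(auto_mask, ref_mask):
    """Similarity index (Dice), precision and recall between an automated
    segmentation and a manual reference given as boolean voxel masks."""
    a, m = auto_mask.astype(bool), ref_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    dice = 2.0 * inter / (a.sum() + m.sum())
    precision = inter / a.sum()   # low false-positive ratio -> high precision
    recall = inter / m.sum()      # low false-negative ratio -> high recall
    return dice, precision, recall

# Toy 1-D "volumes": the automated mask slightly overshoots the reference.
manual = np.zeros(100, dtype=bool); manual[40:60] = True
auto = np.zeros(100, dtype=bool); auto[42:64] = True
print(overlap_metrics(auto, manual))   # ~(0.857, 0.818, 0.900)
```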
Jun, Zhou; Zhen, Chen; QuiuLi, Zhang; YuanQi, An; Casado, Verónica Vocero; Fan, Yuan
2016-12-01
Currently, conventional enzyme immunoassays which use manual gold immunoassays and colloidal tests (GICTs) are used as screening tools to detect Treponema pallidum (syphilis), hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus type 1 (HIV-1), and HIV-2 in patients undergoing surgery. The present observational, cross-sectional study compared the sensitivity, specificity, and work flow characteristics of the conventional algorithm with manual GICTs with those of a newly proposed algorithm that uses the automated Bio-Flash technology as a screening tool in patients undergoing gastrointestinal (GI) endoscopy. A total of 956 patients were examined for the presence of serological markers of infection with HIV-1/2, HCV, HBV, and T. pallidum. The proposed algorithm with the Bio-Flash technology was superior for the detection of all markers (100.0% sensitivity and specificity for detection of anti-HIV and anti-HCV antibodies, HBV surface antigen [HBsAg], and T. pallidum) compared with the conventional algorithm based on the manual method (80.0% sensitivity and 98.6% specificity for the detection of anti-HIV, 75.0% sensitivity for the detection of anti-HCV, 94.7% sensitivity for the detection of HBsAg, and 100% specificity for the detection of anti-HCV and HBsAg) in these patients. The automated Bio-Flash technology-based screening algorithm also reduced the operation time by 85.0% (205 min) per day, saving up to 24 h/week. In conclusion, the use of the newly proposed screening algorithm based on the automated Bio-Flash technology can provide an advantage over the use of conventional algorithms based on manual methods for screening for HIV, HBV, HCV, and syphilis before GI endoscopy. Copyright © 2016 Jun et al.
Chest CT window settings with multiscale adaptive histogram equalization: pilot study.
Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald
2002-06-01
Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is considered a fundamental structural parameter for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a laboratory dual-side imaging system is established to acquire both face-side and back-side images of the woven fabric, and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, the wavelet transform method and the Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators, and the best one is selected to reconstruct the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image can be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement using the proposed method is close to 99.44% compared with the traditional method, and the robustness of the new method is better than that of conventional analysis methods.
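The final density measurement step can be sketched compactly: project the fused gray image along the yarn direction, take the FFT of the profile, and convert the dominant spatial frequency to yarns per unit length. The synthetic fabric and scan resolution below are hypothetical, and the dual-side fusion itself is not reproduced.

```python
import numpy as np

def yarn_density(gray, pixels_per_cm, axis=0):
    """Estimate yarns per cm from the dominant spatial frequency of the
    intensity profile perpendicular to the yarn direction."""
    profile = gray.mean(axis=1 - axis)        # average across the other axis
    profile = profile - profile.mean()        # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)     # cycles per pixel
    peak = 1 + np.argmax(spectrum[1:])        # skip the zero-frequency bin
    return freqs[peak] * pixels_per_cm        # cycles (yarns) per cm

# Synthetic fabric: 24 yarns across a 600-pixel image scanned at 200 px/cm.
x = np.arange(600)
img = 128 + 60 * np.sin(2 * np.pi * 24 * x / 600)[:, None] * np.ones((1, 400))
print(round(yarn_density(img, pixels_per_cm=200), 1))   # ~8.0 yarns/cm
```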
TiO2-coated mesoporous carbon: conventional vs. microwave-annealing process.
Coromelci-Pastravanu, Cristina; Ignat, Maria; Popovici, Evelini; Harabagiu, Valeria
2014-08-15
The study of coating mesoporous carbon materials with titanium oxide nanoparticles is becoming a promising and challenging area of research. To optimize the use of carbon materials in various applications, it is necessary to attach functional groups or other nanostructures to their surface. The combination of the distinctive properties of mesoporous carbon materials and titanium oxide is expected to be applied in field emission displays, nanoelectronic devices, novel catalysts, and polymer or ceramic reinforcement. However, their synthesis is still largely based on conventional techniques, such as wet impregnation followed by chemical reduction of the metal nanoparticle precursors, which take time and money. Thermal heating-based techniques are time-consuming and often lack control of particle size and morphology. Hence, given the growing interest in microwave technology, an alternative way of delivering power into chemical reactions is dielectric heating by microwaves. This work is focused on the advantages of microwave-assisted synthesis of TiO2-coated mesoporous carbon over the conventional thermal heating method. The reviewed studies showed that microwave-assisted synthesis of such composites allows processes to be completed within a shorter reaction time, yielding nanoparticles with properties superior to those obtained by the conventional method. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhong, Z W; Wu, R G; Wang, Z P; Tan, H L
2015-09-01
Conventional microfluidic devices are typically complex and expensive, requiring pneumatic control systems or highly precise pumps to control the flow in the devices. This work investigates an alternative: paper-based microfluidic devices to replace conventional microfluidic devices. Size-based separation and extraction experiments separated free dye from a mixed protein and dye solution. Experimental results showed that pure fluorescein isothiocyanate could be separated from a solution of mixed fluorescein isothiocyanate and fluorescein isothiocyanate-labeled bovine serum albumin. The readings obtained from a spectrophotometer clearly show that the extracted tartrazine sample did not contain any Blue-BSA, because its absorbance was 0.000 at a wavelength of 590 nm, the wavelength corresponding to Blue-BSA. These results demonstrate that paper-based microfluidic devices, which are inexpensive and easy to implement, can potentially replace their conventional counterparts through simple geometry designs and capillary action. These findings will potentially help future developments of paper-based microfluidic devices. Copyright © 2015 Elsevier B.V. All rights reserved.
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
NASA Astrophysics Data System (ADS)
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow-tracking method into a two-stage-concentration solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
When compared with more traditional instructional methods, game-based e-learning (GbEl) promises higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Recent systematic reviews and meta-analyses of studies on game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. To compare the effectiveness on the learning outcome of game-based e-learning (GbEl) instruction with conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated to training with an educational adventure game (GbEl group) and 69 subjects to conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game when compared to the script-based approach. Game-based e-learning is more effective than a script-based approach for the training of urinalysis with regard to cognitive learning outcome and has a highly positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction.
Goodwin, Cody R; Sherrod, Stacy D; Marasco, Christina C; Bachmann, Brian O; Schramm-Sapyta, Nicole; Wikswo, John P; McLean, John A
2014-07-01
A metabolic system is composed of inherently interconnected metabolic precursors, intermediates, and products. The analysis of untargeted metabolomics data has conventionally been performed through the use of comparative statistics or multivariate statistical analysis-based approaches; however, each falls short in representing the related nature of metabolic perturbations. Herein, we describe a complementary method for the analysis of large metabolite inventories using a data-driven approach based upon a self-organizing map algorithm. This workflow allows for the unsupervised clustering, and subsequent prioritization of, correlated features through Gestalt comparisons of metabolic heat maps. We describe this methodology in detail, including a comparison to conventional metabolomics approaches, and demonstrate the application of this method to the analysis of the metabolic repercussions of prolonged cocaine exposure in rat sera profiles.
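A minimal numpy sketch of the underlying self-organizing map, mapping each feature's abundance profile onto a 2-D grid so that correlated features cluster into nearby nodes, is given below; the grid size, learning schedules, and toy profiles are hypothetical, and the Gestalt heat-map comparison layer of the workflow is not reproduced.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map trained with a Gaussian neighbourhood:
    each row of `data` (a feature's abundance profile) is assigned to a
    best-matching unit on a 2-D grid; correlated rows end up nearby."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        frac = step / n_steps
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5      # shrinking neighbourhood
        d2 = np.sum((weights - x) ** 2, axis=2)
        by, bx = np.unravel_index(np.argmin(d2), d2.shape)
        nb = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma**2))
        weights += lr * nb[:, :, None] * (x - weights)
    return weights

# Toy feature profiles: 200 metabolite features measured in 6 conditions.
rng = np.random.default_rng(6)
features = rng.normal(size=(200, 6))
som = train_som(features)
```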
An intelligent switch with back-propagation neural network based hybrid power system
NASA Astrophysics Data System (ADS)
Perdana, R. H. Y.; Fibriana, F.
2018-03-01
The consumption of conventional energy such as fossil fuels plays a critical role in global warming. Carbon dioxide, methane, nitrous oxide, and other emissions contribute to the greenhouse effect and change climate patterns. In fact, 77% of electrical energy is generated from fossil fuel combustion. Therefore, it is necessary to use renewable energy sources to reduce conventional energy consumption for electricity generation. This paper presents an intelligent switch that combines both energy resources, i.e., solar panels as the renewable source with conventional energy from the State Electricity Enterprise (PLN). An artificial intelligence technique, a back-propagation neural network, was designed to control the flow of energy, which is distributed dynamically based on renewable energy generation. Through continuous monitoring of each load and source, the dynamic switching pattern of the intelligent switch outperformed the conventional switching method. The first experiment, with a 60 W solar panel, showed a standard deviation of the trial of 0.7 and a standard deviation of the experiment of 0.28. The second, with a 900 W solar panel, obtained a standard deviation of the trial of 0.05 and of the experiment of 0.18. Moreover, the accuracy reached 83% using this method. By combining the back-propagation neural network with observation of the energy usage of each load via a wireless sensor network, loads can be evenly distributed, reducing conventional energy usage.
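As a rough illustration of the switching idea, the sketch below trains a small back-propagation network to map (solar output, load demand) readings to a source decision (solar vs. PLN grid). The network shape, synthetic data, and decision threshold are assumptions for demonstration, not the paper's design.

```python
# Illustrative back-propagation switch sketch; all data and sizes are toy values.
import numpy as np

rng = np.random.default_rng(1)
solar = rng.uniform(0, 900, 1000) / 900   # normalized solar power (toy data)
load = rng.uniform(0, 900, 1000) / 900    # normalized load demand (toy data)
X = np.column_stack([solar, load])
y = (solar >= load).astype(float).reshape(-1, 1)  # 1 = solar can carry the load

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):                      # plain batch gradient descent
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backpropagate through the sigmoids
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * X.T @ d_h / len(X); b1 -= 0.5 * d_h.mean(axis=0)

route_to_solar = sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5
print("switch decision accuracy:", (route_to_solar == (y > 0.5)).mean())
```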
The new analysis method of PWQ in the DRAM pattern
NASA Astrophysics Data System (ADS)
Han, Daehan; Chang, Jinman; Kim, Taeheon; Lee, Kyusun; Kim, Yonghyeon; Kang, Jinyoung; Hong, Aeran; Choi, Bumjin; Lee, Joosung; Kim, Hyoung Jun; Lee, Kweonjae; Hong, Hyoungsun; Jin, Gyoyoung
2016-03-01
In a sub 2Xnm node process, the feedback of pattern weak points is more and more significant. Therefore, it is very important to extract the systemic defect in Double Patterning Technology(DPT), however, it is impossible to predict exact systemic defect at the recent photo simulation tool.[1] Therefore, the method of Process Window Qualification (PWQ) is very serious and essential these days. Conventional PWQ methods are die to die image comparison by using an e-beam or bright field machine. Results are evaluated by the person, who reviews the images, in some cases. However, conventional die to die comparison method has critical problem. If reference die and comparison die have same problem, such as both of dies have pattern problems, the issue patterns are not detected by current defect detecting approach. Aside from the inspection accuracy, reviewing the wafer requires much effort and time to justify the genuine issue patterns. Therefore, our company adopts die to data based matching PWQ method that is using NGR machine. The main features of the NGR are as follows. First, die to data based matching, second High speed, finally massive data were used for evaluation of pattern inspection.[2] Even though our die to data based matching PWQ method measures the mass data, our margin decision process is based on image shape. Therefore, it has some significant problems. First, because of the long analysis time, the developing period of new device is increased. Moreover, because of the limitation of resources, it may not examine the full chip area. Consequently, the result of PWQ weak points cannot represent the all the possible defects. Finally, since the PWQ margin is not decided by the mathematical value, to make the solid definition of killing defect is impossible. To overcome these problems, we introduce a statistical values base process window qualification method that increases the accuracy of process margin and reduces the review time. Therefore, it is possible to see the genuine margin of the critical pattern issue which we cannot see on our conventional PWQ inspection; hence we can enhance the accuracy of PWQ margin.
Charpentier, R.R.; Klett, T.R.
2005-01-01
During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the U.S. Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.
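The Monte Carlo aggregation step described (draw a number of undiscovered accumulations and a size for each, then accumulate a resource distribution, as in programs like Emc2) can be sketched briefly; the triangular and lognormal parameters below are invented for illustration and are not USGS inputs.

```python
# Hedged Monte Carlo aggregation sketch; all distribution parameters are toy values.
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000
totals = np.empty(trials)
for i in range(trials):
    n = int(round(rng.triangular(1, 5, 20)))             # number of accumulations
    sizes = rng.lognormal(mean=2.0, sigma=1.2, size=n)   # sizes in MMBO (assumed)
    totals[i] = sizes.sum()

# fractiles conventionally reported in USGS-style assessments
# (F95 = 95% chance of at least this volume, i.e. the 5th percentile)
f95, f50, f05 = np.percentile(totals, [5, 50, 95])
print(f"F95={f95:.0f}  F50={f50:.0f}  F05={f05:.0f}  mean={totals.mean():.0f} MMBO")
```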
Jiang, Lianghai; Dong, Liang; Tan, Mingsheng; Qi, Yingna; Yang, Feng; Yi, Ping; Tang, Xiangsheng
2017-01-01
Background Atlantoaxial posterior pedicle screw fixation has been widely used for treatment of atlantoaxial instability (AAI). However, precise and safe insertion of atlantoaxial pedicle screws remains challenging. This study presents a modified drill guide template based on a previous template for atlantoaxial pedicle screw placement. Material/Methods Our study included 54 patients (34 males and 20 females) with AAI. All the patients underwent posterior atlantoaxial pedicle screw fixation: 25 patients underwent surgery with the use of a modified drill guide template (template group) and 29 patients underwent surgery via the conventional method (conventional group). In the template group, a modified drill guide template was designed for each patient. The modified drill guide template and intraoperative fluoroscopy were used for surgery in the template group, while only intraoperative fluoroscopy was used in the conventional group. Results Of the 54 patients, 52 (96.3%) completed the follow-up for more than 12 months. The template group had significantly lower intraoperative fluoroscopy frequency (p<0.001) and higher accuracy of screw insertion (p=0.045) than the conventional group. There were no significant differences in surgical duration, intraoperative blood loss, or improvement of neurological function between the 2 groups (p>0.05). Conclusions Based on the results of this study, it is feasible to use the modified drill guide template for atlantoaxial pedicle screw placement. Using the template can significantly lower the screw malposition rate and the frequency of intraoperative fluoroscopy. PMID:28301445
A Vision-Based Motion Sensor for Undergraduate Laboratories.
ERIC Educational Resources Information Center
Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees
2002-01-01
Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)
Formal hardware verification of digital circuits
NASA Technical Reports Server (NTRS)
Joyce, J.; Seger, C.-J.
1991-01-01
The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.
Applications of discrete element method in modeling of grain postharvest operations
USDA-ARS?s Scientific Manuscript database
Grain kernels are finite and discrete materials. Although flowing grain can behave like a continuum fluid at times, the discontinuous behavior exhibited by grain kernels cannot be simulated solely with conventional continuum-based computer modeling such as finite-element or finite-difference methods...
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so the rotor orientation can be computed from the measured results and the analytical models. Experiments were conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely and that measurement accuracy is improved by the novel 3D magnet array. The results can be used for real-time motion control of PM spherical actuators. PMID:25342000
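As a rough illustration of recovering orientation from Hall-sensor readings and an analytic field model, the following sketch fits a single rotor angle by nonlinear least squares; the exponential-type field model, the eight-sensor layout, and the noise level are stand-ins, not the authors' fitted 3D model.

```python
# Orientation-from-flux sketch; the field model and sensor layout are assumed.
import numpy as np
from scipy.optimize import least_squares

sensor_angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 sensors (assumed)

def flux_model(rotor_angle):
    # assumed exponential-type surface fit of the magnet-array field
    return np.exp(0.8 * np.cos(sensor_angles - rotor_angle))

true_angle = 0.6
readings = flux_model(true_angle) + np.random.default_rng(3).normal(0, 0.01, 8)

fit = least_squares(lambda a: flux_model(a[0]) - readings, x0=[0.0])
print("estimated rotor angle (rad):", fit.x[0])   # close to 0.6
```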
Dimensional Changes of Acrylic Resin Denture Bases: Conventional Versus Injection-Molding Technique
Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam
2014-01-01
Objective: Acrylic resin denture bases undergo dimensional changes during polymerization. Injection molding techniques are reported to reduce these changes and thereby improve physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. Materials and Methods: SR-Ivocap Triplex Hot resin was used for conventional pressure-packed and SR-Ivocap High Impact was used for injection-molding techniques. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. Results: After each water storage period, the acrylic specimens produced by injection exhibited less dimensional changes compared to those produced by the conventional technique. Curing shrinkage was compensated by water sorption with an increase in water storage time decreasing dimensional changes. Conclusion: Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used and SR-Ivocap injection procedure exhibited higher dimensional accuracy compared to conventional molding. PMID:25584050
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.
Vision-based system identification technique for building structures using a motion capture system
NASA Astrophysics Data System (ADS)
Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon
2015-11-01
This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequencies, mode shapes, and damping ratios) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
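The identification chain (displacement, twice-differentiated to acceleration, then frequency domain decomposition via SVD of the cross-spectral density matrix at each frequency line) can be sketched as follows; the synthetic three-channel response, modal frequencies, sampling rate, and window length are assumptions, not the experiment's data.

```python
# FDD sketch on a toy 3-channel response; all signal parameters are assumed.
import numpy as np
from scipy.signal import csd

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(8)
mode_shapes = np.array([[1, 1, 1], [1, 0, -1], [1, -1, 1]], float)
disp = sum(np.outer(m, np.sin(2 * np.pi * f0 * t + rng.uniform(0, 6)))
           for m, f0 in zip(mode_shapes, [1.2, 3.5, 5.4]))
disp = disp + 0.1 * rng.normal(size=disp.shape)      # measurement noise

# displacement -> acceleration by numerical double differentiation
acc = np.gradient(np.gradient(disp, 1 / fs, axis=1), 1 / fs, axis=1)

n_ch = acc.shape[0]
f, _ = csd(acc[0], acc[0], fs=fs, nperseg=1024)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)    # cross-spectral density matrix
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=1024)

s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
print("dominant peak near (Hz):", f[s1.argmax()])    # peaks mark natural frequencies
```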
Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus
2013-10-01
Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Negative Transference Numbers in Polymer Electrolytes
NASA Astrophysics Data System (ADS)
Pesko, Danielle; Timachova, Ksenia; Balsara, Nitash
The energy density and safety of conventional lithium-ion batteries are limited by the use of flammable organic liquids as solvents for lithium salts. Polymer electrolytes have the potential to address both limitations. The poor performance of batteries with polymer electrolytes is generally attributed to low ionic conductivity. The purpose of our work is to show that another transport property, the cation transference number, t+, of polymer electrolytes is fundamentally different from that of conventional electrolytes. Our experimental approach, based on concentrated solution theory, indicates that t+ of mixtures of poly(ethylene oxide) and LiTFSI salt is negative over most of the accessible concentration window. In contrast, approaches based on dilute solution theory suggest that t+ in the same system is positive. In addition to presenting a new approach for determining t+, we also present data obtained from the steady-state current method, pulsed-field-gradient NMR, and the current-interrupt method. Discrepancies between different approaches are resolved. Our work implies that in the absence of concentration gradients, the net fluxes of both cations and anions are directed toward the positive electrode. Conventional liquid electrolytes do not suffer from this constraint.
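For reference, the steady-state current method mentioned above is conventionally evaluated with the Bruce-Vincent relation, a dilute-solution-theory estimate; a standard textbook form (not necessarily the exact expression used in this work) is:

```latex
t_{+}^{ss} = \frac{I_{ss}\,(\Delta V - I_{0} R_{0})}{I_{0}\,(\Delta V - I_{ss} R_{ss})}
```

where ΔV is the applied polarization, I0 and Iss are the initial and steady-state currents, and R0 and Rss are the corresponding interfacial resistances.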
Arslan, Mustafa; Murat, Sema; Alp, Gulce; Zaimoglu, Ali
2018-01-01
The objectives of this in vitro study were to evaluate the flexural strength (FS), surface roughness (Ra), and hydrophobicity of polymethylmethacrylate (PMMA)-based computer-aided design/computer-aided manufacturing (CAD/CAM) polymers and to compare the properties of different CAD/CAM PMMA-based polymers with conventional heat-polymerized PMMA following thermal cycling. Twenty rectangular-shaped specimens (64 × 10 × 3.3 mm) were fabricated from three CAD/CAM PMMA-based polymers (M-PM Disc [M], AvaDent Puck Disc [A], and Pink CAD/CAM Disc Polident [P], and one conventional heat-polymerized PMMA (Promolux [C]), according to ISO 20795-1:2013 standards. The specimens were divided into two subgroups (n = 10), a control and a thermocycled group. The specimens in the thermocycled group were subjected to 5000 thermal cycling procedures (5 to 55°C; 30 s dwell times). The Ra value was measured using a profilometer. Contact angle (CA) was assessed using the sessile drop method to evaluate surface hydrophobicity. In addition, the FS of the specimens was tested in a universal testing machine at a crosshead speed of 1.0 mm/min. Surface texture of the materials was assessed using scanning electron microscope (SEM). The data were analyzed using two-way analysis of variance (ANOVA), followed by Tukey's HSD post-hoc test (α < 0.05). CAD/CAM PMMA-based polymers showed significantly higher FS than conventional heat-polymerized PMMA for each group (P < 0.001). CAD/CAM PMMA-based polymer [P] showed the highest FS, whereas conventional PMMA [C] showed the lowest FS before and after thermal cycling (P < 0.001). There were no significant differences among the Ra values of the tested denture base polymers in the control group (P > 0.05). In the thermocycled group, the lowest Ra value was observed for CAD/CAM PMMA-based polymer [M] (P < 0.001), whereas CAD/CAM PMMA-based polymers [A] and [P], and conventional PMMA [C] had similar Ra values (P > 0.05). Conventional PMMA [C] had a significantly lower CA and consequently lower hydrophobicity compared to the CAD/CAM polymers in the control group (P < 0.001). In the thermocycled group, CAD/CAM PMMA-based polymer [A] and conventional PMMA [C] had significantly higher CA, and consequently higher hydrophobicity when compared to CAD/CAM polymers [M] and [P] (P < 0.001). However, no significant differences were found among the other materials (P > 0.05). The FS and hydrophobicity of the CAD/CAM PMMA-based polymers were higher than the conventional heat-polymerized PMMA, whereas the CAD/CAM PMMA-based polymers had similar Ra values to the conventional PMMA. Thermocycling had a significant effect on FS and hydrophobicity except for the Ra of denture base materials.
Mi, Jianing; Zhang, Min; Zhang, Hongyang; Wang, Yuerong; Wu, Shikun; Hu, Ping
2013-02-01
A highly efficient and environmentally friendly method for the preparation of ginsenosides from Radix Ginseng, coupling ultrasound-assisted extraction with expanded bed adsorption, is described. Based on the optimal extraction conditions screened by response surface methodology, ginsenosides were extracted and adsorbed, then eluted by a two-step elution protocol. Comparison between the coupled ultrasound-assisted extraction/expanded bed adsorption method and the conventional method showed that the former was better in both process efficiency and greenness. The process efficiency and energy efficiency of the coupled method increased to 1.4-fold and 18.5-fold those of the conventional method, while the environmental cost and CO2 emission of the conventional method were 12.9-fold and 17.0-fold those of the new method. Furthermore, a theoretical model for the extraction of the targets was derived. The results revealed that the theoretical model suitably described the process of preparing ginsenosides by the coupled ultrasound-assisted extraction/expanded bed adsorption system. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with a conventional adaptive optics (AO) system, a wavefront sensorless (WFSless) AO system need not measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of WFSless AO, wavefront correction methods for WFSless AO systems are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms typically treats a performance metric as a function of the control parameters and then uses a control algorithm to improve that metric. Model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After briefly describing these typical control algorithms, we generalize hybrid methods that combine model-free with model-based control algorithms. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects.
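One widely used model-free algorithm of the kind surveyed is stochastic parallel gradient descent (SPGD): perturb all actuator commands in parallel, measure the change in a scalar image-quality metric, and step along the estimated gradient. In the sketch below the quadratic metric stands in for a real focal-plane measurement, and the actuator count and gains are illustrative.

```python
# Minimal SPGD loop; metric() is a placeholder for a real image-quality sensor.
import numpy as np

rng = np.random.default_rng(7)
n_act = 32                            # deformable-mirror actuators (assumed)
target = rng.normal(size=n_act)       # hidden optimal command, for simulation only

def metric(u):                        # higher is better (e.g., a Strehl proxy)
    return -np.sum((u - target) ** 2)

u = np.zeros(n_act)
gain, amp = 0.3, 0.05
for _ in range(2000):
    du = amp * rng.choice([-1.0, 1.0], size=n_act)   # parallel Bernoulli perturbation
    dJ = metric(u + du) - metric(u - du)             # two-sided metric difference
    u += gain * dJ * du                              # SPGD update
print("residual command error:", np.linalg.norm(u - target))
```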
Modified Involute Helical Gears: Computerized Design, Simulation of Meshing, and Stress Analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert (Technical Monitor); Litvin, Faydor L.; Gonzalez-Perez, Ignacio; Carnevali, Luca; Kawasaki, Kazumasa; Fuentes-Aznar, Alfonso
2003-01-01
The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears is presented. The approaches proposed for modification of conventional involute helical gears are based on conjugation of double-crowned pinion with a conventional helical involute gear. Double-crowning of the pinion means deviation of cross-profile from an involute one and deviation in longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point-contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of errors of alignment on the shift of bearing contact, vibration, and noise are reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional helical involute gears.
Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang
The steady-state visual evoked potential (SSVEP)-based paradigm is a conventional BCI method with the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in the implementation of SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results over nine subjects revealed significantly less mental load in the motion-reversal task than in the flickering task. The interaction between the factors of "stimulation type" and "fatigue level" also identified motion-reversal stimulation as a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable for the design of more practically applicable steady-state evoked potential based BCIs.
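The power indices used above (θ, α, θ + α, θ/α) can be computed from a single EEG channel with Welch's method; in this sketch the synthetic signal, the 250 Hz sampling rate, and the band edges (4-8 Hz for θ, 8-13 Hz for α) are common choices, not necessarily those of the study.

```python
# EEG band-power indices via Welch's method; signal and rates are toy values.
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
df = f[1] - f[0]

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return psd[m].sum() * df          # rectangle-rule integral of the PSD

theta, alpha = band_power(4, 8), band_power(8, 13)
print(f"theta={theta:.3f}  alpha={alpha:.3f}  "
      f"theta+alpha={theta + alpha:.3f}  theta/alpha={theta / alpha:.3f}")
```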
Recent developments of nano-structured materials as the catalysts for oxygen reduction reaction
NASA Astrophysics Data System (ADS)
Kang, SungYeon; Kim, HuiJung; Chung, Yong-Ho
2018-04-01
The development of highly efficient electrocatalyst materials has been a significant topic of numerous studies for decades. Recent global interest in energy conversion and storage has expanded efforts to find cost-effective catalysts that can substitute for conventional catalytic materials. Especially in the field of fuel cells, novel materials for the oxygen reduction reaction (ORR) have attracted attention as a way to overcome the disadvantages of conventional platinum-based catalysts. Various approaches have been attempted to achieve low cost and electrochemical activity comparable to Pt-based catalysts, including reducing Pt consumption through the formation of hybrid materials, Pt-based alloys, and non-Pt metal- or carbon-based materials. To enhance catalytic performance and stability, numerous methods such as structural modification and complex formation with other functional materials have been proposed; they are fundamentally based on well-defined and well-ordered catalytic active sites achieved by exquisite control at the nanoscale. In this review, we highlight the development of nano-structured catalytic materials for the ORR based on recent findings and discuss an outlook for the direction of future research.
ERIC Educational Resources Information Center
Preston, Janet E.; Kunz, Margie H.
This study compared student learning in secondary consumer and homemaking foods classes using three different methods of teacher preparation. In Method 1, teachers were provided with lists of competencies and workshop training for using the competencies. In Method 2, teachers were provided with lists of competencies and no workshop training; and…
Edu-Mining for Book Recommendation for Pupils
ERIC Educational Resources Information Center
Nagata, Ryo; Takeda, Keigo; Suda, Koji; Kakegawa, Junichi; Morihiro, Koichiro
2009-01-01
This paper proposes a novel method for recommending books to pupils based on a framework called Edu-mining. One of the properties of the proposed method is that it uses only loan histories (pupil ID, book ID, date of loan) whereas the conventional methods require additional information such as taste information from a great number of users which…
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning techniques. We applied the new optimization method to a hang glider design, in which both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
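A minimal sketch of obtaining the solution as a stochastic average: sample candidate designs, weight them with Boltzmann-style factors exp(-cost/T) in the spirit of a path integral, and re-center the search on the weighted mean. The Rosenbrock test cost and all schedule constants are illustrative stand-ins, not the hang glider problem.

```python
# Stochastic-average optimization sketch; cost function and schedule are assumed.
import numpy as np

def cost(x):                                   # stand-in objective (Rosenbrock)
    return (1 - x[..., 0]) ** 2 + 100 * (x[..., 1] - x[..., 0] ** 2) ** 2

rng = np.random.default_rng(11)
mean, spread, T = np.zeros(2), 2.0, 1.0
for _ in range(60):
    samples = mean + spread * rng.normal(size=(500, 2))
    c = cost(samples)
    w = np.exp(-(c - c.min()) / T)             # Boltzmann-style weights
    w /= w.sum()
    mean = (w[:, None] * samples).sum(axis=0)  # expected value under the weights
    spread *= 0.9                              # anneal the search width
print("estimated optimum:", mean)              # near (1, 1) for Rosenbrock
```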
A new algorithm for modeling friction in dynamic mechanical systems
NASA Technical Reports Server (NTRS)
Hill, R. E.
1988-01-01
A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods, in which the friction effect is assumed to be a constant force or torque in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from the finite integration interval used with a conventional friction model, as is the case in many digital-computer-based simulations. The algorithm incorporates a predictive calculation based on the initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation, in connection with an external integration process, provides an accurate determination of both static and Coulomb friction forces and the resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods, and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation in various programming languages such as FORTRAN or C, as well as in other simulation programs.
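A hedged sketch of such a predictive stick-slip step, in the spirit of the algorithm described though not the paper's exact function block: before integrating, test whether the body can remain stuck given the applied force and the static friction limit; otherwise apply Coulomb friction opposing motion, and stop the step at a zero crossing of velocity instead of overshooting it.

```python
# One predictive friction integration step; constants and Euler step are assumed.
def friction_step(v, f_applied, m, dt, f_static, f_coulomb, v_eps=1e-6):
    """Return (new_velocity, friction_force) for one integration step."""
    if abs(v) < v_eps:                          # candidate sticking state
        if abs(f_applied) <= f_static:
            return 0.0, -f_applied              # friction balances the applied force
        f_fric = -f_coulomb if f_applied > 0 else f_coulomb
    else:
        f_fric = -f_coulomb if v > 0 else f_coulomb
    v_new = v + dt * (f_applied + f_fric) / m
    # predictive zero-crossing check: stop at v = 0 rather than overshooting
    if v != 0.0 and v * v_new < 0.0:
        v_new = 0.0
    return v_new, f_fric
```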
Sreemany, Arpita; Bera, Melinda Kumar; Sarkar, Anindya
2017-12-30
The elaborate sampling and analytical protocol associated with conventional dual-inlet isotope ratio mass spectrometry has long hindered high-resolution climate studies from biogenic accretionary carbonates. Laser-based on-line systems, in comparison, produce rapid data, but suffer from unresolved matrix effects. It is, therefore, necessary to resolve these matrix effects to take advantage of the automated laser-based method. Two marine bivalve shells (one aragonite and one calcite) and one fish otolith (aragonite) were first analysed using a CO2 laser ablation system attached to a continuous-flow isotope ratio mass spectrometer under different experimental conditions (different laser powers, sample untreated vs vacuum roasted). The shells and the otolith were then micro-drilled and the isotopic compositions of the powders were measured in a dual-inlet isotope ratio mass spectrometer following the conventional acid digestion method. The vacuum-roasted samples (both aragonite and calcite) produced mean isotopic ratios (with a reproducibility of ±0.2 ‰ for both δ18O and δ13C values) almost identical to the values obtained using the conventional acid digestion method. As the isotopic ratios of the acid-digested samples fall within the analytical precision (±0.2 ‰) of the laser ablation system, the method is useful for studying the biogenic accretionary carbonate matrix. When using laser-based continuous-flow isotope ratio mass spectrometry for high-resolution isotopic measurements of biogenic carbonates, the employment of a vacuum-roasting step will reduce the matrix effect. This method will be of immense help to geologists and sclerochronologists in exploring short-term changes in climatic parameters (e.g. seasonality) in geological time. Copyright © 2017 John Wiley & Sons, Ltd.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images that utilizes the phase information of complex OCT data. In this method, the speckle area is first delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet shrinkage is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
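The "wavelet shrinkage of the original image" step can be sketched with PyWavelets; the wavelet, decomposition level, and universal-threshold rule below are common defaults rather than the authors' settings, and the phase-based pixelwise speckle delineation is omitted.

```python
# Wavelet soft-thresholding sketch for a 2-D image using PyWavelets.
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # universal threshold estimated from the finest diagonal subband (assumed rule)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```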
Electronic method for autofluorography of macromolecules on two-D matrices
Davidson, Jackson B.; Case, Arthur L.
1983-01-01
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be prepared by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.
Rapid automated method for screening of enteric pathogens from stool specimens.
Villasante, P A; Agulla, A; Merino, F J; Pérez, T; Ladrón de Guevara, C; Velasco, A C
1987-01-01
A total of 800 colonies suggestive of Salmonella, Shigella, or Yersinia species isolated on stool differential agar media were inoculated onto both conventional biochemical test media (triple sugar iron agar, urea agar, and phenylalanine agar) and Entero Pathogen Screen cards of the AutoMicrobic system (Vitek Systems, Inc., Hazelwood, Mo.). Based on the conventional tests, the AutoMicrobic system method yielded the following results: 587 true-negatives, 185 true-positives, 2 false-negatives, and 26 false-positives (sensitivity, 99%; specificity, 96%). Both true-positive and true-negative results were achieved considerably earlier than false results (P less than 0.001). The Entero Pathogen Screen card method is a fast, easy, and sensitive method for screening for Salmonella, Shigella, or Yersinia species. The impossibility of screening for oxidase-positive pathogens is a minor disadvantage of this method. PMID:3553230
Seino, Junji; Nakai, Hiromi
2012-06-28
An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed in diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and hydrogen halide clusters, (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods since in the former there is linear-scaling with respect to the system size and a small prefactor.
Optimization of white matter tractography for pre-surgical planning and image-guided surgery.
Arfanakis, Konstantinos; Gui, Minzhi; Lazar, Mariana
2006-01-01
Accurate localization of white matter fiber tracts in relation to brain tumors is a goal of critical importance to the neurosurgical community. White matter fiber tractography by means of diffusion tensor magnetic resonance imaging (DTI) is the only non-invasive method that can provide estimates of brain connectivity. However, conventional tractography methods are based on data acquisition techniques that suffer from image distortions and artifacts. Thus, a large percentage of white matter fiber bundles are distorted, and/or terminated early, while others are completely undetected. This severely limits the potential of fiber tractography in pre-surgical planning and image-guided surgery. In contrast, Turboprop-DTI is a technique that provides images with significantly fewer distortions and artifacts than conventional DTI data acquisition methods. The purpose of this study was to evaluate fiber tracking results obtained from Turboprop-DTI data. It was demonstrated that Turboprop may be a more appropriate DTI data acquisition technique for tracing white matter fibers than conventional DTI methods, especially in applications such as pre-surgical planning and image-guided surgery.
Kaneko, K
1989-09-01
A heating method using microwaves was utilized to obtain a strong thermosetting resin for crowns and bridges, and the physical and mechanical properties of the resin were examined. The resin was cured in a shorter time by the microwave heating method than by the conventional heat curing method, and the working time was reduced markedly. The base resins of the thermosetting resin for crowns and bridges for the microwave heating method were 2 PA and the diluent 3 G. A compounding volume of 30 wt% for diluent 3 G was considered good based on the results for compressive strength, bending strength, and diametral tensile strength. Filler loadings of 200-230 g compounded into the base resins of the 2 PA-3 G system provided optimal compressive strength, bending strength, and diametral tensile strength. A filler loading of 230 g provided optimal hardness and curing shrinkage rate, and the coefficient of thermal expansion became smaller as the compounding volume of the filler increased. The trial thermosetting resin for crowns and bridges formed by the microwave heating method was not inferior to the conventional resin made by the heat curing method or the light curing method.
NASA Astrophysics Data System (ADS)
Wu, Peilin; Zhang, Qunying; Fei, Chunjiao; Fang, Guangyou
2017-04-01
Aeromagnetic gradients are typically measured by optically pumped magnetometers mounted on an aircraft. Any aircraft, particularly a helicopter, produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is essential, and least squares (LS) is the conventional method used for reducing interference levels. However, the LS approach to solving the aeromagnetic interference model has a few difficulties, one of which is in handling multicollinearity. Therefore, we propose an aeromagnetic gradient compensation method, specifically targeted for helicopter use but applicable to any airborne platform, which is based on the ɛ-support vector regression algorithm. The structural risk minimization criterion intrinsic to the method avoids multicollinearity altogether. By constructing an appropriate loss function and kernel function, local aeromagnetic anomalies can be retained while platform-generated fields are suppressed. The method was tested using an unmanned helicopter and obtained improvement ratios of 12.7 and 3.5 in the vertical and horizontal gradient data, respectively; these values are likely better than the conventional method would have achieved on the same data, had a direct comparison been possible. The experimental results demonstrate the validity of the proposed method.
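A sketch of the ε-SVR compensation idea using scikit-learn's SVR: regress the measured signal on attitude-derived features and subtract the prediction, leaving the anomaly. The direction-cosine features and the toy interference/anomaly model below are assumptions, loosely inspired by Tolles-Lawson-style terms, not the paper's feature set.

```python
# epsilon-SVR compensation sketch; features and data are illustrative toys.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n = 2000
cos_x, cos_y, cos_z = rng.uniform(-1, 1, (3, n))     # attitude direction cosines
interference = 30 * cos_x + 12 * cos_y * cos_z       # platform field (hypothetical)
anomaly = 5 * np.sin(np.linspace(0, 20, n))          # local aeromagnetic signal
measured = interference + anomaly + rng.normal(0, 0.5, n)

# attitude features; the anomaly is not a function of them, so it survives
X = np.column_stack([cos_x, cos_y, cos_z, cos_x * cos_y, cos_y * cos_z, cos_z * cos_x])
model = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X, measured)
compensated = measured - model.predict(X)            # interference removed (approx.)
```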
Hagberg, Gisela E; Mamedov, Ilgar; Power, Anthony; Beyerlein, Michael; Merkle, Hellmut; Kiselev, Valerij G; Dhingra, Kirti; Kubìček, Vojtĕch; Angelovski, Goran; Logothetis, Nikos K
2014-01-01
Calcium-sensitive MRI contrast agents can only yield quantitative results if the agent concentration in the tissue is known. The agent concentration could be determined by diffusion modeling, if relevant parameters were available. We have established an MRI-based method capable of determining diffusion properties of conventional and calcium-sensitive agents. Simulations and experiments demonstrate that the method is applicable both for conventional contrast agents with a fixed relaxivity value and for calcium-sensitive contrast agents. The full pharmacokinetic time-course of gadolinium concentration estimates was observed by MRI before, during, and after intracerebral administration of the agent, and the effective diffusion coefficient D* was determined by voxel-wise fitting of the solution to the diffusion equation. The method yielded whole-brain coverage with high spatial and temporal sampling. The use of two types of MRI sequences for sampling of the diffusion time courses was investigated: Look-Locker-based quantitative T(1) mapping, and T(1)-weighted MRI. The observation times of the proposed MRI method are long (up to 20 h), and consequently the diffusion distances covered are also long (2-4 mm). Despite this difference, the D* values in vivo were in agreement with previous findings using optical measurement techniques, based on observation times of a few minutes. The effective diffusion coefficient determined for the calcium-sensitive contrast agents may be used to determine local tissue concentrations and to design infusion protocols that maintain the agent concentration at a steady state, thereby enabling quantitative sensing of the local calcium concentration. Copyright © 2014 John Wiley & Sons, Ltd.
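The voxel-wise fitting step can be illustrated by fitting D* to a concentration time course with SciPy; the instantaneous point-source solution of the diffusion equation, the dose, and the voxel distance below are simplifying assumptions rather than the study's infusion geometry.

```python
# Fit D* from a voxel's concentration time course; model and units are assumed.
import numpy as np
from scipy.optimize import curve_fit

def point_source(t, D, M, r=2.0e-1):      # r in cm: assumed voxel distance
    # instantaneous point-source solution of the 3-D diffusion equation
    return M / (4 * np.pi * D * t) ** 1.5 * np.exp(-r**2 / (4 * D * t))

t = np.linspace(0.5, 20, 40)              # hours of observation
true_D = 2.0e-3                           # cm^2/h, illustrative value
conc = point_source(t, true_D, 1.0) + np.random.default_rng(2).normal(0, 1e-3, t.size)

(D_fit, M_fit), _ = curve_fit(point_source, t, conc, p0=[1e-3, 0.5])
print(f"D* = {D_fit:.2e} cm^2/h")
```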
Thalanany, Mariamma M; Mugford, Miranda; Hibbert, Clare; Cooper, Nicola J; Truesdale, Ann; Robinson, Steven; Tiruvoipati, Ravindranath; Elbourne, Diana R; Peek, Giles J; Clemens, Felicity; Hardy, Polly; Wilson, Andrew
2008-01-01
Background Extracorporeal Membrane Oxygenation (ECMO) is a technology used in treatment of patients with severe but potentially reversible respiratory failure. A multi-centre randomised controlled trial (CESAR) was funded in the UK to compare care including ECMO with conventional intensive care management. The protocol and funding for the CESAR trial included plans for economic data collection and analysis. Given the high cost of treatment, ECMO is considered an expensive technology for many funding systems. However, conventional treatment for severe respiratory failure is also one of the more costly forms of care in any health system. Methods/Design The objectives of the economic evaluation are to compare the costs of a policy of referral for ECMO with those of conventional treatment; to assess cost-effectiveness and the cost-utility at 6 months follow-up; and to assess the cost-utility over a predicted lifetime. Resources used by patients in the trial are identified. Resource use data are collected from clinical report forms and through follow up interviews with patients. Unit costs of hospital intensive care resources are based on parallel research on cost functions in UK NHS intensive care units. Other unit costs are based on published NHS tariffs. Cost effectiveness analysis uses the outcome: survival without severe disability. Cost utility analysis is based on quality adjusted life years gained based on the Euroqol EQ-5D at 6 months. Sensitivity analysis is planned to vary assumptions about transport costs and method of costing intensive care. Uncertainty will also be expressed in analysis of individual patient data. Probabilities of cost effectiveness given different funding thresholds will be estimated. Discussion In our view it is important to record our methods in detail and present them before publication of the results of the trial so that a record of detail not normally found in the final trial reports can be made available in the public domain. Trial Registrations The CESAR trial registration number is ISRCTN47279827. PMID:18447931
St Jacques, Jeannine-Marie; Cumming, Brian F; Sauchyn, David J; Smol, John P
2015-01-01
The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation and hence the pollen rain since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by using the conventional method of inferring temperature from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any past pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is when conventional pollen-based inference methods are used, and consequently how seriously regional manifestations of global warming have been underestimated with traditional pollen-based techniques.
Web-based surveys as an alternative to traditional mail methods.
Fleming, Christopher M; Bowden, Mark
2009-01-01
Environmental economists have long used surveys to gather information about people's preferences. A recent innovation in survey methodology has been the advent of web-based surveys. While the Internet appears to offer a promising alternative to conventional survey administration modes, concerns exist over potential sampling biases associated with web-based surveys and the effect these may have on valuation estimates. This paper compares results obtained from a travel cost questionnaire of visitors to Fraser Island, Australia, that was conducted using two alternate survey administration modes; conventional mail and web-based. It is found that response rates and the socio-demographic make-up of respondents to the two survey modes are not statistically different. Moreover, both modes yield similar consumer surplus estimates.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE are determined by cost values calculated beforehand by a simplified MUSIC scan, which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
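A compact sketch of the weighting idea: score each candidate source location by its correlation with the data's signal subspace (a simplified MUSIC prescan), fold the scores together with depth normalization into the wMNE weighting matrix, and solve the weighted minimum-norm problem. The lead field, data, regularization constant, and the assumed three-component signal subspace are random stand-ins for illustration.

```python
# MUSIC-weighted minimum-norm sketch; lead field and data are random toys.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_sources, n_times = 64, 300, 50
A = rng.normal(size=(n_sensors, n_sources))          # lead-field matrix (toy)
B = rng.normal(size=(n_sensors, n_times))            # measured spatiotemporal data

# signal subspace from the data (simplified MUSIC prescan)
U, s, _ = np.linalg.svd(B, full_matrices=False)
Us = U[:, :3]                                        # assume 3 signal components
a_unit = A / np.linalg.norm(A, axis=0)               # unit-norm topographies
music = np.linalg.norm(Us.T @ a_unit, axis=0) ** 2   # subspace correlation per source

W = np.diag(music / np.linalg.norm(A, axis=0))       # MUSIC weights + depth normalization
lam = 1e-2 * np.trace(A @ W @ A.T) / n_sensors       # Tikhonov regularization (assumed)
J = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(n_sensors), B)
```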
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worm, Esben S., E-mail: esbeworm@rm.dk; Department of Medical Physics, Aarhus University Hospital, Aarhus; Hoyer, Morten
2012-05-01
Purpose: To develop and evaluate accurate and objective on-line patient setup based on a novel semiautomatic technique in which three-dimensional marker trajectories were estimated from two-dimensional cone-beam computed tomography (CBCT) projections. Methods and Materials: Seven treatment courses of stereotactic body radiotherapy for liver tumors were delivered in 21 fractions in total to 6 patients by a linear accelerator. Each patient had two to three gold markers implanted close to the tumors. Before treatment, a CBCT scan with approximately 675 two-dimensional projections was acquired during a full gantry rotation. The marker positions were segmented in each projection. From this, the three-dimensional marker trajectories were estimated using a probability-based method. The required couch shifts for patient setup were calculated from the mean marker positions along the trajectories. A motion phantom moving with known tumor trajectories was used to examine the accuracy of the method. Trajectory-based setup was retrospectively used off-line for the first five treatment courses (15 fractions) and on-line for the last two treatment courses (6 fractions). Automatic marker segmentation was compared with manual segmentation. The trajectory-based setup was compared with setup based on conventional CBCT guidance on the markers (first 15 fractions). Results: Phantom measurements showed that trajectory-based estimation of the mean marker position was accurate within 0.3 mm. The on-line trajectory-based patient setup was performed within approximately 5 minutes. The automatic marker segmentation agreed with manual segmentation within 0.36 ± 0.50 pixels (mean ± SD; pixel size, 0.26 mm in isocenter). The accuracy of conventional volumetric CBCT guidance was compromised by motion smearing (≤21 mm) that induced an absolute three-dimensional setup error of 1.6 ± 0.9 mm (maximum, 3.2) relative to trajectory-based setup. Conclusions: The first on-line clinical use of trajectory estimation from CBCT projections for precise setup in stereotactic body radiotherapy was demonstrated. Uncertainty in the conventional CBCT-based setup procedure was eliminated with the new method.
Algorithms that Defy the Gravity of Learning Curve
2017-04-28
three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE...streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ...need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the
Improvement of spatial resolution in a Timepix based CdTe photon counting detector using ToT method
NASA Astrophysics Data System (ADS)
Park, Kyeongjin; Lee, Daehee; Lim, Kyung Taek; Kim, Giyoon; Chang, Hojong; Yi, Yun; Cho, Gyuseong
2018-05-01
Photon counting detectors (PCDs) have been recognized as potential candidates for X-ray radiography and computed tomography due to their many advantages over conventional energy-integrating detectors. In particular, a PCD-based X-ray system shows an improved contrast-to-noise ratio and a reduced radiation exposure dose and, more importantly, exhibits a capability for material decomposition with energy binning. Some applications require very high resolution, which translates into smaller pixel sizes. Unfortunately, small pixels may suffer from energy spectral distortions (distortion in energy resolution) due to charge sharing effects (CSEs). In this work, we propose a method for correcting CSEs by measuring the point of interaction of an incident X-ray photon with the time-over-threshold (ToT) method. Moreover, we show that it is possible to obtain an X-ray image with a reduced effective pixel size by using the concept of virtual pixels at a given pixel size. To verify the proposed method, modulation transfer function (MTF) and signal-to-noise ratio (SNR) measurements were carried out with the Timepix chip combined with a CdTe pixel sensor. The X-ray test condition was set at 80 kVp with 5 μA, and a tungsten edge phantom and a lead line phantom were used for the measurements. Enhanced spatial resolution was achieved by applying the proposed method when compared to the conventional photon counting method: MTF increased from 6.3 lp/mm (conventional counting method) to 8.3 lp/mm (proposed method) at 0.3 MTF. On the other hand, the SNR decreased from 33.08 to 26.85 dB due to the four virtual pixels.
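A hedged sketch of the virtual-pixel idea: when charge is shared across a pixel cluster, estimate the interaction point as the ToT-weighted centroid and assign the count on a finer grid. The 2x subdivision and the toy two-pixel cluster below are assumptions for illustration only, not the chip's actual processing.

```python
# ToT-weighted centroid assigned to a finer "virtual pixel" grid (illustrative).
import numpy as np

def virtual_pixel(cluster_rows, cluster_cols, tot_values, subdiv=2):
    tot = np.asarray(tot_values, dtype=float)
    r = np.dot(cluster_rows, tot) / tot.sum()   # ToT-weighted centroid row
    c = np.dot(cluster_cols, tot) / tot.sum()   # ToT-weighted centroid column
    return int(round(r * subdiv)), int(round(c * subdiv))

# a 2-pixel shared event: most charge in pixel (10, 20), some in (10, 21)
print(virtual_pixel([10, 10], [20, 21], [80, 20]))   # -> index on the finer grid
```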
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges in the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employ a superpixel algorithm to reveal the glottal area by eliminating the local variance of pixels caused by bleeding, blood vessels, and light reflections from the mucosa. The glottal area is then detected by a seeded region-growing algorithm in a fully automatic manner. Experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal-area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficient of the hybrid method were 82%, 93%, and 82%, respectively, which are superior to state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for a computer-aided diagnosis system that can be used in clinical routine.
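A sketch of the hybrid pipeline using scikit-image: SLIC superpixels smooth out local pixel variance, and a seeded region-growing pass then merges adjacent superpixels with similar mean intensity. The seed point, intensity tolerance, and SLIC parameters are assumptions; the paper selects the seed automatically.

```python
# Superpixel + seeded region-growing sketch; parameters are illustrative.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray

def segment_glottis(image_rgb, seed, tol=0.08, n_segments=400):
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)
    gray = rgb2gray(image_rgb)
    # mean intensity per superpixel replaces noisy pixel values
    means = {l: gray[labels == l].mean() for l in np.unique(labels)}
    seed_label = labels[seed]                      # seed = (row, col) inside glottis
    grown, changed = {seed_label}, True
    while changed:                                 # grow over adjacent, similar superpixels
        changed = False
        for l in np.unique(labels):
            if l in grown:
                continue
            if abs(means[l] - means[seed_label]) < tol and _touches(labels, l, grown):
                grown.add(l); changed = True
    return np.isin(labels, list(grown))

def _touches(labels, l, grown):
    mask = labels == l
    border = np.zeros_like(mask)                   # 4-neighbourhood of the region
    border[:-1] |= mask[1:]; border[1:] |= mask[:-1]
    border[:, :-1] |= mask[:, 1:]; border[:, 1:] |= mask[:, :-1]
    return np.isin(labels[border & ~mask], list(grown)).any()
```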
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for the fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data to be calculated for the video holograms is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, of those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
Porwal, Anand; Khandelwal, Meenakshi; Punia, Vikas; Sharma, Vivek
2017-01-01
Aim: The purpose of this study was to evaluate the effect of different denture cleansers on the color stability, surface hardness, and roughness of different denture base resins. Materials and Methods: Three denture base resin materials (conventional heat cure resin, high impact resin, and polyamide denture base resin) were immersed for 180 days in two commercially available denture cleansers (sodium perborate and sodium hypochlorite). Color, surface roughness, and hardness were measured for each sample before and after the immersion procedure. Statistical Analysis: One-way analysis of variance and Tukey's post hoc honestly significant difference test were used to evaluate color, surface roughness, and hardness data before and after immersion in denture cleanser (α=0.05). Results: All denture base resins tested exhibited a change in color, surface roughness, and hardness to some degree in both denture cleansers. Polyamide resin immersed in sodium perborate showed the maximum change in color after immersion for 180 days. Conventional heat cure resin immersed in sodium hypochlorite showed the maximum change in surface roughness, and conventional heat cure resin immersed in sodium perborate showed the maximum change in hardness. Conclusion: Color changes of all denture base resins were within the clinically accepted range for color difference. The surface roughness change of conventional heat cure resin was not within the clinically accepted range. The choice of denture cleanser for a given denture base resin should be based on the chemistry of the resin and cleanser, the denture cleanser concentration, and the duration of immersion. PMID:28216847
Superpave binder implementation : final report.
DOT National Transportation Integrated Search
1999-01-01
Oregon Department of Transportation (ODOT) has specified performance-based asphalts (PBAs) since 1991. Developed by the Pacific Coast Conference on Asphalt Specifications (PCCAS) in 1990, the PBA concept uses conventional test methods for classificat...
Schlett, Christopher L; Doll, Hinnerk; Dahmen, Janosch; Polacsek, Ole; Federkeil, Gero; Fischer, Martin R; Bamberg, Fabian; Butzlaff, Martin
2010-01-14
Problem-based Learning (PBL) has been suggested as a key educational method of knowledge acquisition to improve medical education. We sought to evaluate the differences in medical school education between graduates from PBL-based and conventional curricula and to what extent these curricula fit job requirements. Graduates from all German medical schools who graduated between 1996 and 2002 were eligible for this study. Graduates self-assessed nine competencies as required at their day-to-day work and as taught in medical school on a 6-point Likert scale. Results were compared between graduates from a PBL-based curriculum (University Witten/Herdecke) and conventional curricula. Three schools were excluded because of low response rates. Baseline demographics between graduates of the PBL-based curriculum (n = 101, 49% female) and the conventional curricula (n = 4720, 49% female) were similar. No major differences were observed regarding job requirements, with priorities for "Independent learning/working" and "Practical medical skills". All competencies were rated to be better taught in the PBL-based curriculum compared to the conventional curricula (all p < 0.001), except for "Medical knowledge" and "Research competence". Comparing competencies required at work and taught in medical school, PBL was associated with benefits in "Interdisciplinary thinking" (Delta +0.88), "Independent learning/working" (Delta +0.57), "Psycho-social competence" (Delta +0.56), "Teamwork" (Delta +0.39) and "Problem-solving skills" (Delta +0.36), whereas "Research competence" (Delta -1.23) and "Business competence" (Delta -1.44) in the PBL-based curriculum needed improvement. Among medical graduates in Germany, PBL demonstrated benefits with regard to competencies which were highly required in the job of physicians. Research and business competence deserve closer attention in future curricular development.
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% compared with conventional CIT models.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
The conventional optimization methods were based on a deterministic approach, since their purpose is to find out an exact solution. However, these methods have initial condition dependence and risk of falling into local solution. In this paper, we propose a new optimization method based on a concept of path integral method used in quantum mechanics. The method obtains a solutions as an expected value (stochastic average) using a stochastic process. The advantages of this method are not to be affected by initial conditions and not to need techniques based on experiences. We applied the new optimization method to a design of the hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has a sufficient performance.
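The core idea, taking the solution as a stochastic average rather than following one deterministic trajectory, can be sketched generically: sample candidates around the current estimate, weight them by a Boltzmann factor of their cost, and move to the weighted mean. This is our generic illustration of the expected-value idea, not the authors' path-integral formulation; all parameter values are assumptions.

```python
import numpy as np

def stochastic_average_minimize(cost, x0, sigma=1.0, temp=1.0,
                                n_samples=500, n_iter=50, seed=0):
    """Estimate a minimiser as a Boltzmann-weighted expected value of
    random samples, instead of following a single deterministic path."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        samples = x + sigma * rng.standard_normal((n_samples, x.size))
        costs = np.apply_along_axis(cost, 1, samples)
        w = np.exp(-(costs - costs.min()) / temp)  # subtract min for stability
        x = (w[:, None] * samples).sum(axis=0) / w.sum()  # stochastic average
        sigma *= 0.95  # gradually concentrate the sampling cloud
    return x

# e.g. a multimodal test function where a deterministic descent can stall
print(stochastic_average_minimize(
    lambda v: (v**2).sum() + np.sin(5 * v).sum(), x0=np.ones(2) * 3.0))
```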
van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L
2016-08-01
An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models of the actual bones, obtained from CT scans. The SSM-based measurement method was more robust (consistent output under varying measurement circumstances) than the conventional 2D method, showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed sensitivity to changes in the mJSW comparable to that of the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0.82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvement of the model accuracy and the optimisation technique is attainable. Combined with the promising options for applications using quantitative information on bone morphology, SSM-based 3D reconstructions of natural knees are attractive for further development. Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data usually end in a weak firing state, which undermines the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN), trained on input-output data with the gradient descent method. With the new learning algorithm, the weak-firing problem of the conventional method is addressed. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014(a) software was used to simulate the results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
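For concreteness, the sketch below tunes the centres, widths, and consequents of Gaussian fuzzy rules by gradient descent on squared error for a single-input, zero-order Takagi-Sugeno system. It illustrates the general tuning mechanics only; the paper's algorithm additionally couples an RBFNN and specifically addresses weak firing, which this toy does not reproduce.

```python
import numpy as np

def gaussian_mf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def train_fuzzy(x, y, n_rules=5, lr=0.05, epochs=2000, seed=0):
    """Tune centres c, widths s and consequents w of a zero-order
    Takagi-Sugeno fuzzy system by gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    c = np.linspace(x.min(), x.max(), n_rules)          # rule centres
    s = np.full(n_rules, (x.max() - x.min()) / n_rules) # rule widths
    w = rng.standard_normal(n_rules)                    # rule consequents
    for _ in range(epochs):
        mu = gaussian_mf(x[:, None], c, s)              # firing strengths
        phi = mu / (mu.sum(axis=1, keepdims=True) + 1e-12)  # normalised
        err = phi @ w - y
        w -= lr * phi.T @ err / len(x)
        # chain rule through the normalised firing strengths
        g = (err[:, None] * (w - (phi @ w)[:, None]) * phi) / len(x)
        c -= lr * (g * (x[:, None] - c) / s**2).sum(axis=0)
        s -= lr * (g * (x[:, None] - c) ** 2 / s**3).sum(axis=0)
    return c, s, w
```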
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses. An improved method of evaluating marginal discrepancy is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods with a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and 110 ±14.3 μm for those fabricated using the digital technique was. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). The digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Pick- and waveform-based techniques for real-time detection of induced seismicity
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Scarabello, Luca; Böse, Maren; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2018-05-01
The monitoring of induced seismicity is a common operation in many industrial activities, such as conventional and non-conventional hydrocarbon production or mining and geothermal energy exploitation, to cite a few. During such operations, we generally collect very large and strongly noise-contaminated data sets that require robust and automated analysis procedures. Induced seismicity data sets are often characterized by sequences of multiple events with short interevent times or overlapping events; in these cases, pick-based location methods may struggle to correctly assign picks to phases and events, and errors can lead to missed detections and/or reduced location resolution and incorrect magnitudes, which can have significant consequences if real-time seismicity information is used in risk assessment frameworks. To overcome these issues, different waveform-based methods for the detection and location of microseismicity have been proposed. The main advantage of waveform-based methods is that they appear to perform better and can simultaneously detect and locate seismic events, providing high-quality locations in a single step; the main disadvantage is that they are computationally expensive. Although these methods have been applied to different induced seismicity data sets, an extensive comparison with sophisticated pick-based detection methods is still missing. In this work, we introduce our improved waveform-based detector and compare its performance with two pick-based detectors implemented within the SeisComP3 software suite. We test the performance of these three approaches on both synthetic and real data sets related to the induced seismicity sequence at the deep geothermal project in the vicinity of the city of St. Gallen, Switzerland.
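For reference, the pick-based side of such a comparison typically starts from an energy-ratio trigger; the classic STA/LTA detector below is a minimal stand-in (not the SeisComP3 pickers used in the paper), with illustrative window lengths.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA ratio used by pick-based detectors: a pick is
    declared where the short-term average energy jumps relative to the
    long-term background level. Window lengths are in seconds."""
    n_sta = int(sta_win * fs)
    n_lta = int(lta_win * fs)
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    # trailing averages whose windows end at each sample index
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    # align both series on window-end index n_lta .. N
    ratio = sta[n_lta - n_sta:] / np.maximum(lta, 1e-12)
    return ratio  # trigger where the ratio exceeds a threshold, e.g. 3-5
```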
Experimental study of geotextile as plinth beam in a pile group-supported modeled building frame
NASA Astrophysics Data System (ADS)
Ravi Kumar Reddy, C.; Gunneswara Rao, T. D.
2017-12-01
This paper presents the experimental results of static vertical load tests on a model building frame with geotextile as plinth beam, supported by pile groups embedded in cohesionless soil (sand). The experimental results have been compared with those obtained from nonlinear FEA and the conventional method of analysis. The results revealed that the conventional method of analysis gives a shear force about 53% higher, a bending moment at the top of the column about 17% higher, and a bending moment at the base of the column about 50-98% higher than those given by the nonlinear FEA for the frame with geotextile as plinth beam.
Ultrasonically controlled particle size distribution of explosives: a safe method.
Patil, Mohan Narayan; Gore, G M; Pandit, Aniruddha B
2008-03-01
Size reduction of high energy materials (HEMs) by conventional mechanical methods is not safe, as these materials are very sensitive to friction and impact. Modified crystallization techniques can be used for the same purpose. The solute is dissolved in the solvent and crystallized via cooling or precipitated out using an antisolvent. The various crystallization parameters, such as temperature, antisolvent addition rate, and agitation, are adjusted to obtain the required final crystal size and morphology. The solvent-antisolvent ratio, time of crystallization, and yield of the product are the key factors for controlling an antisolvent-based precipitation process. The advantages of cavitationally induced nucleation can be coupled with the conventional crystallization process. This study examines the effect of the ultrasonically generated acoustic cavitation phenomenon on the solvent-antisolvent precipitation process. CL-20, a high-energy explosive compound, is a polyazapolycyclic caged polynitramine. CL-20 has greater energy output than existing (in-use) energetic ingredients while having an acceptable level of insensitivity to shock and other external stimuli. The size control and size distribution manipulation of the high energy material (CL-20) have been carried out safely and quickly, along with an increase in the final mass yield compared to the conventional antisolvent-based precipitation process.
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in harsh channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received by coherent demodulation. Based on SR theory, a nonlinear receiver model is established to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB) and is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
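The nonlinear receiver in such SR schemes is typically the overdamped bistable system dx/dt = ax - bx^3 + s(t); the Euler-integration sketch below is a generic illustration with made-up parameters, not the paper's exact receiver model.

```python
import numpy as np

def bistable_sr(signal, fs, a=1.0, b=1.0):
    """Pass a noisy signal through the canonical bistable SR system
    dx/dt = a*x - b*x**3 + s(t), integrated with forward Euler."""
    dt = 1.0 / fs
    x = np.zeros(len(signal))
    for k in range(1, len(signal)):
        x[k] = x[k-1] + dt * (a * x[k-1] - b * x[k-1]**3 + signal[k-1])
    return x

# weak carrier buried in noise (illustrative parameters only)
fs, f0 = 1000.0, 5.0
t = np.arange(0, 10, 1 / fs)
noisy = (0.3 * np.cos(2 * np.pi * f0 * t)
         + np.random.default_rng(0).normal(0, 1, t.size))
out = bistable_sr(noisy, fs)
```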
NASA Astrophysics Data System (ADS)
Ikeda, Hayato; Nagaoka, Ryo; Lafond, Maxime; Yoshizawa, Shin; Iwasaki, Ryosuke; Maeda, Moe; Umemura, Shin-ichiro; Saijo, Yoshifumi
2018-07-01
High-intensity focused ultrasound is a noninvasive treatment applied by externally irradiating ultrasound to the body to coagulate the target tissue thermally. Recently, it has been proposed as a noninvasive treatment for vascular occlusion to replace conventional invasive treatments. Cavitation bubbles generated by the focused ultrasound can accelerate the effect of thermal coagulation. However, the tissues surrounding the target may be damaged by cavitation bubbles generated outside the treatment area. Conventional methods based on Doppler analysis only in the time domain are not suitable for monitoring blood flow in the presence of cavitation. In this study, we proposed a novel filtering method based on the differences in spatiotemporal characteristics, to separate tissue, blood flow, and cavitation by employing singular value decomposition. Signals from cavitation and blood flow were extracted automatically using spatial and temporal covariance matrices.
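A minimal sketch of the SVD separation step: stack the frames into a Casorati matrix, decompose, and reconstruct three singular-value bands. Here the band cut-offs are fixed by hand, whereas the proposed method selects the components from the spatial and temporal covariance structure; the cut-off indices below are assumptions.

```python
import numpy as np

def svd_filter(frames, low_cut, high_cut):
    """Separate tissue / blood / cavitation by singular-value bands.

    frames: (n_frames, n_pixels) Casorati matrix of beamformed data.
    Slowly varying tissue concentrates in the largest singular values,
    blood flow in the middle band, broadband cavitation in the tail.
    """
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    def band(i0, i1):
        # reconstruct only the chosen singular-value band
        return (U[:, i0:i1] * s[i0:i1]) @ Vt[i0:i1, :]
    tissue = band(0, low_cut)
    blood = band(low_cut, high_cut)
    cavitation = band(high_cut, len(s))
    return tissue, blood, cavitation
```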
Sudo, Hirotaka; O'driscoll, Michael; Nishiwaki, Kenji; Kawamoto, Yuji; Gammell, Philip; Schramm, Gerhard; Wertli, Toni; Prinz, Heino; Mori, Atsuhide; Sako, Kazuhiro
2012-01-01
The application of a head space analyzer for oxygen concentration was examined to develop a novel ampoule leak test method. Studies using ampoules filled with ethanol-based solution and with nitrogen in the headspace demonstrated that the head space analysis (HSA) method showed sufficient sensitivity in detecting an ampoule crack. The proposed method uses HSA in conjunction with the pretreatment of an overpressurising process known as bombing to facilitate oxygen flow through the crack in the ampoule, for use in routine production. The method was examined in comparative studies with a conventional dye ingress method, and the results showed that the HSA method exhibits sensitivity superior to the dye method. The results indicate that the HSA method in combination with the bombing treatment provides potential application as a leak test for the detection of container defects, not only for ampoule products with ethanol-based solutions but also for lyophilized products in vials with nitrogen in the head space.
NASA Astrophysics Data System (ADS)
Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming
2016-04-01
This paper performs a comprehensive study on the deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions by avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited on the two CADx applications for the differentiation of breast ultrasound lesions and lung CT nodules. The SDAE architecture is well equipped with the automatic feature exploration mechanism and noise tolerance advantage, and hence may be suitable to deal with the intrinsically noisy property of medical image data from various imaging modalities. To show the outperformance of SDAE-based CADx over the conventional scheme, two latest conventional CADx algorithms are implemented for comparison. 10 times of 10-fold cross-validations are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show the significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of the CADx systems without the need of explicit design and selection of problem-oriented features.
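To make the building block concrete, the fragment below trains a single denoising auto-encoder layer with masking noise and tied weights in plain NumPy; stacking several such layers and fine-tuning with a classifier gives an SDAE-style pipeline. This is a generic textbook sketch, not the authors' architecture or hyperparameters.

```python
import numpy as np

def train_dae(X, n_hidden=64, noise=0.3, lr=0.1, epochs=200, seed=0):
    """One denoising auto-encoder layer: corrupt the input, then learn
    to reconstruct the clean version (tied weights, sigmoid units).
    X is assumed scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    n_vis = X.shape[1]
    W = rng.normal(0, 0.01, (n_vis, n_hidden))
    bh, bv = np.zeros(n_hidden), np.zeros(n_vis)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > noise)    # masking corruption
        H = sig(Xc @ W + bh)                      # encode
        R = sig(H @ W.T + bv)                     # decode (tied weights)
        dR = (R - X) * R * (1 - R) / len(X)       # output delta (MSE loss)
        dH = (dR @ W) * H * (1 - H)               # hidden delta
        W -= lr * (Xc.T @ dH + dR.T @ H)          # gradient wrt tied W
        bv -= lr * dR.sum(axis=0)
        bh -= lr * dH.sum(axis=0)
    return W, bh

# features for a downstream classifier: sigmoid(X @ W + bh)
```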
Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning
ERIC Educational Resources Information Center
Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan
2009-01-01
Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…
Implementation of an Improved Adaptive Testing Theory
ERIC Educational Resources Information Center
Al-A'ali, Mansoor
2007-01-01
Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and the examinees' responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least square method, a…
Lights, Camera, Lesson: Teaching Literacy through Film
ERIC Educational Resources Information Center
Lipiner, Michael
2011-01-01
This in-depth case study explores a modern approach to education: the benefits of using film, technology and other creative, non-conventional pedagogical methods in the classroom to enhance students' understanding of literature. The study explores the positive effects of introducing a variety of visual-based (and auditory-based) teaching methods…
NASA Astrophysics Data System (ADS)
Wang, Jingcheng; Luo, Jingrun
2018-04-01
Due to the extremely high particle volume fraction (greater than 85%) and the damage features of polymer bonded explosives (PBXs), conventional micromechanical methods lead to inaccurate estimates of their effective elastic properties. According to their manufacturing characteristics, a multistep approach based on micromechanical methods is proposed. PBXs are treated as pseudo poly-crystal materials consisting of equivalent composite particles (explosive crystals with binder coating), rather than two-phase composites composed of explosive particles and binder matrix. The moduli of the composite spheres are first obtained by the generalized self-consistent method, and the self-consistent method is modified to calculate the effective moduli of the PBX. Defects and the particle size distribution are considered by the Mori-Tanaka method. Results show that when the multistep approach is applied to PBX 9501, the estimates are far more accurate than the conventional micromechanical results: the bulk modulus is 5.75% higher, and the shear modulus 5.78% lower, than the experimental values. Further analyses show that while the particle volume fraction and the binder's properties have significant influences on the effective moduli of the PBX, the moduli of the particles have only a minor influence. Investigation of another particle size distribution indicates that the use of more fine particles enhances the effective moduli of the PBX.
Sivalingam, Jaichandran; Lam, Alan Tin-Lun; Chen, Hong Yu; Yang, Bin Xia; Chen, Allen Kuan-Liang; Reuveny, Shaul; Loh, Yuin-Han; Oh, Steve Kah-Weng
2016-08-01
In vitro generation of red blood cells (RBCs) from human embryonic stem cells and human induced pluripotent stem cells appears to be a promising alternate approach to circumvent shortages in donor-derived blood supplies for clinical applications. Conventional methods for hematopoietic differentiation of human pluripotent stem cells (hPSC) rely on embryoid body (EB) formation and/or coculture with xenogeneic cell lines. However, most current methods for hPSC expansion and EB formation are not amenable to scale-up to the levels required for large-scale RBC generation. Moreover, differentiation methods that rely on xenogeneic cell lines would face obstacles for future clinical translation. In this study, we report the development of a serum-free and chemically defined microcarrier-based suspension culture platform for scalable hPSC expansion and EB formation. Improved survival and better-quality EBs generated with the microcarrier-based method resulted in significantly improved mesoderm induction and, when combined with hematopoietic differentiation, resulted in at least a 6-fold improvement in hematopoietic precursor expansion, potentially culminating in an 80-fold improvement in the yield of RBC generation compared to a conventional EB-based differentiation method. In addition, we report efficient terminal maturation and generation of mature enucleated RBCs using a coculture system that comprised primary human mesenchymal stromal cells. The microcarrier-based platform could prove to be an appealing strategy for future scale-up of hPSC culture, EB generation, and large-scale generation of RBCs under defined and xeno-free conditions.
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its calculation cost becomes very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate through numerical simulations that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.
Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G
2011-05-01
Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images into a predefined number of bins by the associated respiratory phase or displacement, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase- or displacement-based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement- and phase-based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two-dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire dataset using Dijkstra's shortest path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts for both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times. Optimized sorting using anatomic similarity significantly reduces 4D CT motion artifacts compared to conventional phase- or displacement-based sorting. This improved sorting algorithm is a straightforward extension of the two most common 4D CT sorting algorithms.
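The optimization step can be sketched as a layered shortest-path problem: one node per candidate image at each couch position, with edge cost 1 minus the correlation between adjacent boundary slices. Because the graph is layered, plain dynamic programming gives the same result as Dijkstra's algorithm; the data layout below ('top'/'bottom' boundary slices per candidate) is our assumption.

```python
import numpy as np

def select_candidates(candidates):
    """Pick one candidate image per couch position so that adjacent
    boundary slices are maximally correlated (layered shortest path).

    candidates: list over couch positions; candidates[p][i] is a dict
    with 'top' and 'bottom' boundary slices (2D arrays) -- hypothetical.
    """
    def cost(a, b):  # low cost = anatomically similar interface
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        r = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return 1.0 - r
    n = len(candidates)
    best = [np.zeros(len(candidates[0]))]   # best cost to each node
    back = []                               # backpointers per layer
    for p in range(1, n):
        c = np.array([[best[-1][i] + cost(candidates[p-1][i]['bottom'],
                                          candidates[p][j]['top'])
                       for i in range(len(candidates[p-1]))]
                      for j in range(len(candidates[p]))])
        back.append(c.argmin(axis=1))
        best.append(c.min(axis=1))
    picks = [int(np.argmin(best[-1]))]      # trace the optimal path back
    for p in range(n - 2, -1, -1):
        picks.append(int(back[p][picks[-1]]))
    return picks[::-1]
```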
Li, Peng; Jia, Junwei; Bai, Lan; Pan, Aihu; Tang, Xueming
2013-07-01
Genetically modified carnation (Dianthus caryophyllus L.) Moonshade was approved for planting and commercialization in several countries from 2004. Developing methods for analyzing Moonshade is necessary for implementing genetically modified organism labeling regulations. In this study, the 5'-transgene integration sequence was isolated using thermal asymmetric interlaced (TAIL)-PCR. Based upon the 5'-transgene integration sequence, conventional and TaqMan real-time PCR assays were established. The relative limit of detection for the conventional PCR assay was 0.05 % for Moonshade using 100 ng total carnation genomic DNA, corresponding to approximately 79 copies of the carnation haploid genome, and the limits of detection and quantification of the TaqMan real-time PCR assay were estimated to be 51 and 254 copies of haploid carnation genomic DNA, respectively. These results are useful for identifying and quantifying Moonshade and its derivatives.
GESFIDE-PROPELLER Approach for Simultaneous R2 and R2* Measurements in the Abdomen
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C.
2013-01-01
Purpose To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. Materials and Methods R2 and R2* measurements were performed in 9 healthy volunteers and phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Results Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts, which were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p = 0.162), whereas the Cartesian-sampling-based GESFIDE method significantly overestimated R2* values compared to MGRE measurements (p < 0.001). Conclusion The GESFIDE-PROPELLER sequence provided high-quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approach. PMID:24041478
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
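For context, the conventional reconstruction that the ML framework is compared against (and shown to approximate) is dictionary matching: each voxel's signal evolution is matched to the simulated dictionary atom with the largest normalised inner product. A minimal sketch, with hypothetical array layouts:

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, params):
    """Conventional MR fingerprinting step: match each voxel's measured
    signal evolution to the dictionary atom with the largest normalised
    inner product, then read off that atom's tissue parameters.

    signals:    (n_voxels, n_timepoints) complex measurements
    dictionary: (n_atoms,  n_timepoints) complex simulated evolutions
    params:     (n_atoms,  n_params) tissue parameters per atom
    """
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    score = np.abs(signals @ d.conj().T)   # correlation with each atom
    return params[score.argmax(axis=1)]
```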
General framework for dynamic large deformation contact problems based on phantom-node X-FEM
NASA Astrophysics Data System (ADS)
Broumand, P.; Khoei, A. R.
2018-04-01
This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.
Jabbari, Hamidreza; Fakhri, Mohammad; Lotfaliani, Mojtaba; Kiani, Arda
2013-01-01
It is suggested that hot electrocoagulation-enabled forceps (hot biopsy) may reduce the hemorrhage risk after biopsy of endobronchial tumors. The main concern with this method is a possible reduction in specimen quality. The aim was to compare procedure-related hemorrhage with hot biopsy and conventional forceps biopsy, and the diagnostic quality of the specimens obtained with either technique. In this prospective study, assessments of the biopsy samples and of the quantity of hemorrhage were done in a blind fashion. First, a definite clinical diagnosis was made for each patient based on pathologic examination of all available samples, clinical data, and imaging findings. Then, a second pathologist reviewed all samples to evaluate their quality. A total of 36 patients with endobronchial lesions were included in this study. A definite diagnosis was made in 83% of the patients. The diagnostic yields of the two methods were not statistically different, while the mean hemorrhage grades of all hot biopsy protocols were significantly lower than that of conventional biopsy (p=0.003, p<0.001, and p<0.001 for the 10, 20, and 40 voltage settings, respectively). No significant difference was detected between the quality of specimens obtained by the hot biopsy methods and conventional biopsy (p>0.05 for all three voltages). Hot biopsy can be a valuable alternative to forceps biopsy in evaluating endobronchial lesions.
Identification of suitable sites for mountain ginseng cultivation using GIS and geo-temperature.
Kang, Hag Mo; Choi, Soo Im; Kim, Hyun
2016-01-01
This study was conducted to explore an accurate site identification technique using a geographic information system (GIS) and geo-temperature (gT) for locating suitable sites for growing cultivated mountain ginseng (CMG; Panax ginseng), which is highly sensitive to the environmental conditions in which it grows. The study site was Jinan-gun, South Korea. The spatial resolution for geographic data was set at 10 m × 10 m, and the temperatures for various climatic factors influencing CMG growth were calculated by averaging the 3-year temperatures obtained from the automatic weather stations of the Korea Meteorological Administration. Identification of suitable sites for CMG cultivation was undertaken using both a conventional method and a new method, in which the gT was added as one of the most important factors for crop cultivation. The results yielded by the 2 methods were then compared. When the gT was added as an additional factor (new method), the proportion of suitable sites identified decreased by 0.4 % compared with the conventional method. However, the proportion matching real CMG cultivation sites increased by 3.5 %. Moreover, only 68.2 % corresponded with suitable sites identified using the conventional factors; i.e., 31.8 % were newly detected suitable sites. The accuracy of GIS-based identification of suitable CMG cultivation sites improved by applying the temperature factor (i.e., gT) in addition to the conventionally used factors.
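In GIS terms the identification step is a boolean raster overlay; the sketch below shows the pattern of adding a geo-temperature band to a conventional multi-factor overlay. Every threshold here is an illustrative placeholder, not the study's actual criteria.

```python
import numpy as np

def suitable_sites(slope, aspect, canopy, geo_temp, t_range=(10.0, 16.0)):
    """Boolean raster overlay on co-registered 10 m grids: a cell is
    suitable only when every factor passes its threshold; the
    geo-temperature band tightens the conventional overlay.
    All thresholds are illustrative assumptions."""
    ok = slope <= 30.0                        # not too steep
    ok &= (aspect >= 0) & (aspect <= 90)      # e.g. north-east facing
    ok &= canopy >= 0.7                       # shaded under forest canopy
    ok &= (geo_temp >= t_range[0]) & (geo_temp <= t_range[1])
    return ok                                 # boolean suitability raster
```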
Costa, M.O.L.P.; Heráclio, S.A.; Coelho, A.V.C.; Acioly, V.L.; Souza, P.R.E.; Correia, M.T.S.
2015-01-01
In the present study, we compared the performance of a ThinPrep cytological method with the conventional Papanicolaou test for diagnosis of cytopathological changes, with regard to unsatisfactory results achieved at the Central Public Health Laboratory of the State of Pernambuco. A population-based, cross-sectional study was performed with women aged 18 to 65 years, who spontaneously sought gynecological services in Public Health Units in the State of Pernambuco, Northeast Brazil, between April and November 2011. All patients in the study were given a standardized questionnaire on sociodemographics, sexual characteristics, reproductive practices, and habits. A total of 525 patients were assessed by the two methods (11.05% were under the age of 25 years, 30.86% were single, 4.4% had had more than 5 sexual partners, 44% were not using contraception, 38.85% were users of alcohol, 24.38% were smokers, 3.24% had consumed drugs previously, 42.01% had gynecological complaints, and 12.19% had an early history of sexually transmitted diseases). The two methods showed poor correlation (k=0.19; 95%CI=0.11–0.26; P<0.001). The ThinPrep method reduced the rate of unsatisfactory results from 4.38% to 1.71% (χ2=5.28; P=0.02), and the number of cytopathological changes diagnosed increased from 2.47% to 3.04%. This study confirmed that adopting the ThinPrep method for diagnosis of cervical cytological samples was an improvement over the conventional method. Furthermore, this method may reduce possible losses from cytological resampling and reduce obstacles to patient follow-up, improving the quality of the public health system in the State of Pernambuco, Northeast Brazil. PMID:26247400
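The agreement statistic quoted above (k=0.19) is Cohen's kappa; for reference, a minimal implementation over two paired result vectors:

```python
import numpy as np

def cohens_kappa(a, b):
    """Agreement between two diagnostic methods beyond chance
    (unweighted Cohen's kappa) from paired categorical results."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    n = len(a)
    # contingency table of method a (rows) versus method b (columns)
    table = np.array([[np.sum((a == i) & (b == j)) for j in cats]
                      for i in cats], dtype=float)
    po = np.trace(table) / n                          # observed agreement
    pe = (table.sum(axis=1) @ table.sum(axis=0)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)
```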
Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad
2012-06-01
The HLAMatchmaker algorithm, which allows the identification of "safe" acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in its current Excel format. The automation of this algorithm may universalize its use to the benefit of allograft allocation. Recently, we developed a new software tool called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by the conventional HLAMatchmaker method or by the automated EpHLA method. Users testing these two methods were asked to record: (i) the time required for completion of the analysis (in minutes); (ii) the number of eplets obtained for class I and class II HLA molecules; (iii) the categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) the determination of AMMs based on eplet reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. In particular, the EpHLA software was faster and more reliable than, and equally as accurate as, the conventional method in defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs, and it may thus be a very useful tool in the decision-making process of organ allocation for highly sensitized patients, as well as in many other applications.
Analyzing Carbohydrate-Based Regenerative Fuel Cells as a Power Source for Unmanned Aerial Vehicles
2008-03-01
conventional means of generating electrical energy, such as turbines and internal combustion engines, in that the conventional methods normally have an...have 24 hours of daylight, this means that it must be able to store enough exergy (the total amount of energy that can theoretically be converted to...useful work, differentiated from useful energy by the efficiency of converting energy to work) to function during the time when exergy consumption is
Jayaratne, Yasas Shri Nalaka; Uribe, Flavio; Janakiraman, Nandakumar
2017-01-01
The objective of this systematic review was to compare the antero-posterior, vertical, and angular changes of maxillary incisors between conventional anchorage control techniques and mini-implant-based space closure methods. The electronic databases PubMed, Scopus, ISI Web of Knowledge, Cochrane Library, and Open Grey were searched for potentially eligible studies using a set of predetermined keywords. Full texts meeting the inclusion criteria, as well as their references, were manually searched. The primary outcome data (linear, angular, and vertical maxillary incisor changes) and secondary outcome data (overbite changes, soft tissue changes, biomechanical factors, root resorption, and treatment duration) were extracted from the selected articles and entered into spreadsheets based on the type of anchorage used. The methodological quality of each study was assessed. Six studies met the inclusion criteria. The amount of incisor retraction was greater with buccally placed mini-implants than with conventional anchorage techniques. Incisor retraction with indirect anchorage from palatal mini-implants was less than with buccally placed mini-implants. Incisor intrusion occurred with buccal mini-implants, whereas extrusion was seen with conventional anchorage. Limited data on biomechanical variables or adverse effects such as root resorption were reported in these studies. More RCTs that take into account relevant biomechanical variables and employ three-dimensional quantification of tooth movements are required to provide information on incisor changes during space closure.
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often causes beam hardening problems in image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks, and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods consider only the ideal scanning configuration and often suffer problems in interior tomography due to projection truncation. Addressing this problem, this paper proposes a beam hardening correction method based on the radon inversion transform for interior tomography. Experimental results show that, compared with conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of both theoretical and practical significance for artifact correction in industrial CT.
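The paper's radon-inversion-based correction is specific to truncated interior data, but the underlying beam hardening problem is often handled by linearisation: fit a polynomial that maps polychromatic projection values of a calibration object onto their ideal monochromatic counterparts, then apply it to measurements. The sketch below shows that classic water-precorrection pattern, explicitly not the method proposed in the text.

```python
import numpy as np

def water_precorrection(proj_poly, calib_poly, calib_mono, deg=3):
    """Classic linearisation-style beam hardening correction (a
    conventional baseline, not this paper's radon-inversion method):
    fit a polynomial mapping polychromatic projections of a known
    calibration object onto their ideal monochromatic values, then
    apply the same mapping to the measured projection data."""
    coeff = np.polyfit(calib_poly, calib_mono, deg)
    return np.polyval(coeff, proj_poly)
```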
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1980-01-01
A review of propeller noise prediction technology is presented, highlighting developments in the field from Gutin's successful early attempt to current sophisticated techniques. Two methods for the prediction of discrete frequency noise from conventional and advanced propellers in forward flight are described. These methods, developed at MIT and NASA Langley Research Center, are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of the two programs, the acoustic pressure signature, is Fourier analyzed to obtain the acoustic pressure spectrum. The main difference between the programs as currently coded is that the Langley program can handle propellers with supersonic tip speeds while the MIT program is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G
2014-06-15
Purpose: Gain calibration for X-ray imaging systems with movable flat panel detectors (FPD) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain-map for each FPD position. Large kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original crosshair regions previously filled with the interpolated values. A final, seamless gain-map was created from the processed images by either the sequential filling (SF) or selective averaging (SA) techniques developed in this study. Quantitative evaluation was performed based on the detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was found to be more effective in eliminating crosshair artifacts compared to the conventional single-image method. The mean DQEIF values over the range of frequencies from 0.5 to 3.5 mm-1 were 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region, for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system associated with a moving FPD and an intrinsic crosshair. The technique showed advantages over a conventional single-image-based technique by successfully reducing residual crosshair artifacts and yielding higher image quality with respect to DQE.
Bio-barcode gel assay for microRNA
NASA Astrophysics Data System (ADS)
Lee, Hyojin; Park, Jeong-Eun; Nam, Jwa-Min
2014-02-01
MicroRNA has been identified as a potential biomarker because microRNA expression levels are correlated with various cancers. Detection at low concentrations would be highly beneficial for cancer diagnosis. Here, we develop a new type of DNA-modified gold nanoparticle-based bio-barcode assay that uses a conventional gel electrophoresis platform and potassium cyanide chemistry, and show that this assay can detect microRNA at aM levels without enzymatic amplification. It is also shown that single-base-mismatched microRNA can be differentiated from perfectly matched microRNA and that multiplexed detection of various combinations of microRNA sequences is possible with this approach. Finally, differently expressed microRNA levels are selectively detected from cancer cells using the bio-barcode gel assay, and the results are compared with conventional polymerase chain reaction-based results. The method and results shown herein pave the way for the practical use of conventional gel electrophoresis for detecting biomolecules of interest even at aM levels without polymerase chain reaction amplification.
Wideband Motion Control by Position and Acceleration Input Based Disturbance Observer
NASA Astrophysics Data System (ADS)
Irie, Kouhei; Katsura, Seiichiro; Ohishi, Kiyoshi
The disturbance observer can observe and suppress the disturbance torque within its bandwidth. Recent motion systems are spreading through society and are required to be capable of contact with unknown environments. Such haptic motion requires a much wider bandwidth. However, since the conventional disturbance observer attains the acceleration response by twice differentiating the position response, its bandwidth is limited by derivative noise. This paper proposes a novel structure for a disturbance observer. The proposed disturbance observer uses an acceleration sensor to enlarge the bandwidth. Generally, the bandwidth of an acceleration sensor extends from 1 Hz to more than 1 kHz. To cover the DC range, the conventional position-sensor-based disturbance observer is integrated with it. Thus, the performance of the proposed Position and Acceleration input based Disturbance Observer (PADO) is superior to the conventional one. The PADO is applied to position control (infinite stiffness) and force control (zero stiffness). The numerical and experimental results show the viability of the proposed method.
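The fusion idea behind PADO can be sketched as a complementary filter: take the low-frequency band of the acceleration estimated from the position sensor (reliable at DC) and the high-frequency band from the accelerometer (reliable up to the kHz range). The first-order discrete filters and crossover frequency below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_acceleration(acc_from_pos, acc_sensor, fs, f_c=1.0):
    """Complementary fusion in the spirit of PADO: DC/low band from the
    position-derived acceleration, high band from the accelerometer.
    First-order filters; crossover frequency f_c is illustrative."""
    dt, tau = 1.0 / fs, 1.0 / (2 * np.pi * f_c)
    alpha = tau / (tau + dt)
    low = np.zeros(len(acc_from_pos))
    fused = np.zeros(len(acc_from_pos))
    hp = 0.0
    for k in range(1, len(fused)):
        # low-pass the position-based estimate
        low[k] = alpha * low[k-1] + (1 - alpha) * acc_from_pos[k]
        # high-pass the accelerometer signal
        hp = alpha * (hp + acc_sensor[k] - acc_sensor[k-1])
        fused[k] = low[k] + hp
    return fused
```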
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated from the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization.
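As a rough illustration of the weighted-CS formulation, the sketch below runs ISTA on undersampled 2-D Fourier data. It deliberately simplifies the paper's model: a Cartesian FFT stands in for the pseudopolar Radon/Fourier operator, sparsity is imposed in the pixel basis for brevity, and the weights w play the role of the noise/rebinning-error statistics.

```python
import numpy as np

def weighted_ista(y, mask, w, lam=0.01, n_iter=200):
    """Weighted CS recovery from masked 2-D Fourier samples via ISTA.

    y    : measured k-space data (complex array, zero outside mask)
    mask : boolean sampling mask
    w    : per-sample confidence weights in (0, 1]
    With norm='ortho' the forward operator has unit norm, so step 1 is safe.
    """
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        r = mask * (np.fft.fft2(x, norm='ortho') - y)       # k-space residual
        x = x - np.real(np.fft.ifft2(w * r, norm='ortho'))  # weighted gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft threshold
    return x
```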
Autonomous control systems - Architecture and fundamental issues
NASA Technical Reports Server (NTRS)
Antsaklis, P. J.; Passino, K. M.; Wang, S. J.
1988-01-01
A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system, and it allows interaction with the pilot and crew/ground station and with the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement and that intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the `intelligent' functions. It is shown that, in addition to conventional controllers, the autonomous control system incorporates planning, learning, and fault detection and identification (FDI).
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
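The shrinkage step at the heart of WASABI can be pictured as follows: the field of a single point (a Fresnel-zone-like pattern) is wavelet-transformed and all but the largest coefficients are discarded, so superposing many object points becomes cheap. The sketch below (PyWavelets; the toy PSF, wavelet choice, and 2% retention rate are illustrative assumptions) shows only this shrinkage step, not the full superposition machinery.

```python
import numpy as np
import pywt

def shrunken_psf(n=256, keep=0.02, wavelet='db4'):
    """Wavelet-shrink a toy Fresnel point-spread function, keeping only
    the largest `keep` fraction of coefficients."""
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    psf = np.cos(0.05 * (x**2 + y**2))            # toy Fresnel zone pattern
    coeffs = pywt.wavedec2(psf, wavelet, level=4)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)   # keep top 2% of coefficients
    arr[np.abs(arr) < thresh] = 0.0
    return pywt.array_to_coeffs(arr, slices, output_format='wavedec2')

# usage: the sparse coefficients still reconstruct a faithful PSF approximation
# psf_approx = pywt.waverec2(shrunken_psf(), 'db4')
```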
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.
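One natural reading of the optimal-fringe-angle idea is vectorial: project horizontal and vertical fringes, measure the phase shift each experiences for a known depth displacement, and combine the two as components of a sensitivity vector. The one-liner below encodes that reading; it is an assumption about the construction, not the paper's exact procedure.

```python
import numpy as np

def optimal_fringe_angle(dphi_h, dphi_v):
    """Fringe angle (degrees) most sensitive to depth change, from the phase
    shifts that a known depth displacement induces in horizontal-fringe
    (dphi_h) and vertical-fringe (dphi_v) images."""
    return np.degrees(np.arctan2(dphi_v, dphi_h))
```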
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or creating a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
Hearing Silence: Toward a Mixed-Method Approach for Studying Genres' Exclusionary Potential
ERIC Educational Resources Information Center
Randazzo, Chalice
2015-01-01
Traditional Rhetorical Genre Study (RGS) methods are not well adapted to study exclusion because excluded information and people are typically absent from the genre, and some excluded information is simply unrelated to the genre because of genre conventions or social context. Within genre-based silences, how can scholars differentiate between an…
Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M
2012-07-01
Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. To compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis, two hundred nonduplicate blood cultures from cases of sepsis were analyzed using the two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable to the former method with respect to Gram-negative bacilli. In this study, the sensitivity of the lysis centrifugation method relative to the conventional blood culture method was 49.75%, the specificity was 98.21%, and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, which was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was highly significantly shorter (P<0.001) than time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is among the most promising solutions owing to its low cost and easy implementation without equipment updates. Many MPPT methods with a fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable-step-size incremental conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC and proposed methods under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter with a control circuit using a dsPIC30F4011. Both simulation and experimental results are provided in several aspects. A comparative study between the proposed variable-step-size and fixed-step-size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
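A minimal sketch of one update of a variable-step-size incremental-conductance tracker is shown below. The gain N, fallback step, duty-cycle limits, and the sign convention relating duty cycle to PV voltage depend on the converter topology and are illustrative assumptions.

```python
def inc_cond_step(v, i, v_prev, i_prev, d, N=0.05, fixed=0.005,
                  d_min=0.1, d_max=0.9):
    """One iteration of variable-step incremental-conductance MPPT.
    The duty cycle d moves by a step scaled with |dP/dV|, so the step
    shrinks automatically near the maximum power point."""
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    step = N * abs(dp / dv) if dv != 0 else fixed   # variable step; fixed fallback
    if dv == 0:
        if di > 0:
            d -= step
        elif di < 0:
            d += step
    elif di / dv > -i / v:      # dP/dV > 0: operating point left of the MPP
        d -= step
    elif di / dv < -i / v:      # dP/dV < 0: operating point right of the MPP
        d += step
    return min(max(d, d_min), d_max)
```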
Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets
NASA Astrophysics Data System (ADS)
Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo
2016-05-01
In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least-squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentation of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli that reproduce the measured natural frequencies and frequency response functions in finite element analyses with reasonable accuracy.
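The stage-wise fitting loop can be sketched with scipy, assuming a user-supplied finite element modal solver compute_f(E) (a hypothetical interface) that returns natural frequencies for a vector of nine moduli:

```python
import numpy as np
from scipy.optimize import least_squares

def identify_moduli(E0, measured_f, compute_f, n_modes=(4, 8, 12)):
    """Stage-wise nonlinear least-squares identification of orthotropic
    moduli from measured natural frequencies.  Following the paper's idea,
    the number of fitted modes grows between stages and each stage is
    seeded with the previous solution (initial-condition update).
    Assumes at least as many residuals as parameters at every stage."""
    E = np.asarray(E0, float)
    for m in n_modes:
        res = least_squares(
            lambda p: compute_f(p)[:m] - measured_f[:m], E, method='lm')
        E = res.x
    return E
```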
Greenwald, Ralf R.; Quitadamo, Ian J.
2014-01-01
A changing undergraduate demographic and the need to help students develop advanced critical thinking skills in neuroanatomy courses has prompted many faculty to consider new teaching methods including clinical case studies. This study compared primarily conventional and inquiry-based clinical case (IBCC) teaching methods to determine which would produce greater gains in critical thinking and content knowledge. Results showed students in the conventional neuroanatomy course gained less than 3 national percentile ranks while IBCC students gained over 7.5 within one academic term using the valid and reliable California Critical Thinking Skills Test. In addition to 2.5 times greater gains in critical thinking, IBCC teaching methods also produced 12% greater final exam performance and 11% higher grades using common grade performance benchmarks. Classroom observations also indicated that IBCC students were more intellectually engaged and participated to a greater extent in classroom discussions. Through the results of this study, it is hoped that faculty who teach neuroanatomy and desire greater critical thinking and content student learning outcomes will consider using the IBCC method. PMID:24693256
Lee, Nari; Choi, Sung-Wook; Chang, Hyun-Joo; Chun, Hyang Sook
2018-05-01
This study presents a method for rapid detection of Escherichia coli O157:H7 in fresh lettuce based on the properties of target separation and localized surface plasmon resonance of immunomagnetic nanoparticles. The multifunctional immunomagnetic nanoparticles enabling simultaneous separation and detection were prepared by synthesizing magnetic nanoparticles (ca. 10 nm in diameter) composed of an iron oxide (Fe3O4) core and a gold shell and then conjugating these nanoparticles with anti-E. coli O157:H7 antibodies. The application of multifunctional immunomagnetic nanoparticles for detecting E. coli O157:H7 in a lettuce matrix allowed detection of <1 log CFU mL⁻¹ without prior enrichment. In contrast, the detection limit of the conventional plating method was 2.74 log CFU mL⁻¹. The method, which requires no preenrichment, provides an alternative to conventional microbiological detection methods and can be used as a rapid screening tool for a large number of food samples.
Current trends in endotoxin detection and analysis of endotoxin-protein interactions.
Dullah, Elvina Clarie; Ongkudon, Clarence M
2017-03-01
Endotoxin is a type of pyrogen found in Gram-negative bacteria. Endotoxin can form stable interactions with other biomolecules, making its removal difficult, especially during the production of biopharmaceutical drugs. Preventing endotoxins from contaminating biopharmaceutical products is paramount, as endotoxin contamination, even in small quantities, can result in fever, inflammation, sepsis, tissue damage and even death. Highly sensitive and accurate detection of endotoxins is key to the development of biopharmaceutical products derived from Gram-negative bacteria. It facilitates the study of the intermolecular interaction of an endotoxin with other biomolecules, and hence the selection of appropriate endotoxin removal strategies. Currently, most researchers rely on the conventional LAL-based endotoxin detection method. However, new methods have been and are being developed to overcome the problems associated with the LAL-based method. This review paper highlights current research trends in endotoxin detection, from conventional methods to newly developed biosensors. Additionally, it provides an overview of the use of electron microscopy, dynamic light scattering (DLS), fluorescence resonance energy transfer (FRET) and docking programs in endotoxin-protein analysis.
Kim, Mingue; Eom, Youngsub; Lee, Hwa; Suh, Young-Woo; Song, Jong Suk; Kim, Hyo Myung
2018-02-01
To evaluate the accuracy of intraocular lens (IOL) power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio. Nine hundred twenty-eight eyes from 928 reference subjects and 158 eyes from 158 cataract patients who underwent phacoemulsification surgery were enrolled. The adjusted corneal power of the cataract patients was calculated using the fictitious refractive index obtained from the geometric mean posterior/anterior corneal curvature radii ratio of the reference subjects, together with the adjusted anterior and predicted posterior corneal curvature radii derived from conventional keratometry (K) using that ratio. The median absolute error (MedAE) based on the adjusted corneal power was compared with that based on conventional K in the Haigis and SRK/T formulae. The geometric mean posterior/anterior corneal curvature radii ratio was 0.808, and the fictitious refractive index of the cornea for a single Scheimpflug camera was 1.3275. The mean difference between the adjusted corneal power and conventional K was 0.05 diopter (D). The MedAE based on the adjusted corneal power (0.31 D in the Haigis formula and 0.32 D in the SRK/T formula) was significantly smaller than that based on conventional K (0.41 D and 0.40 D, respectively; both P < 0.001). The percentage of eyes with refractive prediction error within ±0.50 D calculated using the adjusted corneal power (74.7%) was significantly greater than that obtained using conventional K (62.7%) in the Haigis formula (P = 0.029). IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio provided more accurate refractive outcomes than calculation using conventional K.
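For a concrete sense of the adjustment: corneal power follows the thin-interface relation K = (n − 1)/r, and the study's fictitious index 1.3275 simply replaces the usual keratometric index (commonly 1.3375) in that relation. A small sketch:

```python
def corneal_power(r_anterior_mm, n_fict=1.3275):
    """Corneal power (diopters) from the anterior curvature radius using a
    fictitious refractive index; the study derived n = 1.3275, whereas
    conventional keratometry typically assumes 1.3375."""
    return (n_fict - 1.0) / (r_anterior_mm / 1000.0)

# e.g. r = 7.7 mm: adjusted power ~42.5 D vs conventional K ~43.8 D
print(corneal_power(7.7), corneal_power(7.7, n_fict=1.3375))
```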
Wong, M S; Cheng, J C Y; Wong, M W; So, S F
2005-04-01
A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to the completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one-third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no statistically significant dimensional differences (at the 0.05 level) in 19 of the 25 studied dimensions. This study demonstrated that the CAD/CAM system can save time in the rectification process and offers a relatively high resemblance in cast rectification compared with the conventional manual method.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, speckle areas are pre-delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet shrinkage is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality. PMID:28663860
Scerbo, Michelle H; Kaplan, Heidi B; Dua, Anahita; Litwin, Douglas B; Ambrose, Catherine G; Moore, Laura J; Murray, Col Clinton K; Wade, Charles E; Holcomb, John B
2016-06-01
Sepsis from bacteremia occurs in 250,000 cases annually in the United States, has a mortality rate as high as 60%, and is associated with a poorer prognosis than localized infection. Because of these high figures, empiric antibiotic administration for patients with systemic inflammatory response syndrome (SIRS) and suspected infection is the second most common indication for antibiotic administration in intensive care units (ICUs). However, overuse of empiric antibiotics contributes to the development of opportunistic infections, antibiotic resistance, and the increase in multi-drug-resistant bacterial strains. The current method of diagnosing and ruling out bacteremia is via blood culture (BC) and Gram stain (GS) analysis. Conventional and molecular methods for diagnosing bacteremia were reviewed and compared. The clinical implications, use, and current clinical trials of polymerase chain reaction (PCR)-based methods to detect bacterial pathogens in the blood stream were detailed. BC/GS has several disadvantages: some bacteria do not grow in culture media, others do not stain appropriately, and cultures can require up to 5 d to guide or discontinue antibiotic treatment. PCR-based methods can potentially be applied to detect microbes in human blood samples rapidly, accurately, and directly. Compared with conventional BC/GS, particular advantages of molecular methods (specifically, PCR-based methods) include faster results, leading to possible improved antibiotic stewardship when bacteremia is not present.
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over the USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, as is an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
Development of fuzzy air quality index using soft computing approach.
Mandal, T; Gorai, A K; Pathak, G
2012-10-01
Proper assessment of air quality status in an atmosphere based on limited observations is an essential task for meeting the goals of environmental management. A number of classification methods are available for estimating the changing status of air quality. However, discrepancies frequently arise from the air quality criteria employed and from the vagueness or fuzziness embedded in the decision-making output values. Owing to this inherent imprecision, conventional methodologies such as the air quality index have difficulty describing integrated air quality conditions with respect to various pollutant parameters and times of exposure. In recent years, fuzzy logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in environmental issues. In the present study, a methodology based on fuzzy inference systems (FIS) to assess air quality is proposed. This paper presents a comparative study assessing the status of air quality using the fuzzy logic technique and the conventional technique. The findings clearly indicate that the FIS may successfully harmonize inherent discrepancies and interpret complex conditions.
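A toy two-input Mamdani-type FIS conveys the mechanics (membership functions, rule firing by min, aggregation by max, centroid defuzzification). All breakpoints, pollutants, and rules below are invented for illustration and are not the paper's.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_aqi(pm10, so2):
    """Toy Mamdani inference mapping two pollutant readings to a grade on [0, 100]."""
    z = np.linspace(0, 100, 201)                      # output universe
    low_pm, high_pm = tri(pm10, -1, 0, 80), tri(pm10, 40, 150, 400)
    low_so2, high_so2 = tri(so2, -1, 0, 60), tri(so2, 30, 120, 350)
    good, poor = tri(z, 0, 20, 50), tri(z, 50, 80, 100)
    # rule 1: low PM10 AND low SO2 -> good; rule 2: high PM10 OR high SO2 -> poor
    agg = np.maximum(np.minimum(min(low_pm, low_so2), good),
                     np.minimum(max(high_pm, high_so2), poor))
    return np.sum(z * agg) / np.sum(agg)              # centroid defuzzification

print(fuzzy_aqi(pm10=30.0, so2=15.0))   # mostly "good" air
```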
Controlling bridging and pinching with pixel-based mask for inverse lithography
NASA Astrophysics Data System (ADS)
Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan
2016-03-01
Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years, as it can produce mask output that results in process latitude and CD control in the fab that is hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain of mask pixels was suggested in the 2006 paper by Y. Granik, "Fast pixel-based mask optimization for inverse lithography". The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Namely, objective functions expressing penalties for constraint violations are proposed, and their minimization with gradient descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC, and has been applied in 14 nm and 10 nm node production, single- and multiple-patterning flows.
2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.
Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J
2010-03-29
A method of tomographic phase retrieval is developed for multi-material objects whose components each have a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index of each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise compared to conventional absorption-based tomography.
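The single-material building block underlying such single-distance TIE methods is the well-known Paganin-type low-pass filter, sketched below with numpy; the paper's multi-material extension is not reproduced, and the variable names are illustrative.

```python
import numpy as np

def paganin_thickness(I, I0, z, delta, mu, pixel):
    """Single-distance TIE (Paganin-type) retrieval of projected thickness.

    I     : propagation-based contrast image recorded at defocus z [m]
    I0    : incident flux (flat field)
    delta : refractive index decrement of the (single) material
    mu    : linear attenuation coefficient [1/m]
    pixel : detector pixel size [m]
    """
    ny, nx = I.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    k2 = kx[None, :]**2 + ky[:, None]**2
    filt = 1.0 + (z * delta / mu) * k2                    # TIE low-pass denominator
    M = np.real(np.fft.ifft2(np.fft.fft2(I / I0) / filt)) # filtered contrast
    return -np.log(np.clip(M, 1e-12, None)) / mu          # thickness T(x, y)
```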
P Waiker, Veena; Shivalingappa, Shanthakumar
2015-01-01
Platelet-rich plasma is known for its hemostatic, adhesive and healing properties, owing to the multiple growth factors released from the platelets to the site of the wound. The primary objective of this study was to use autologous platelet-rich plasma (PRP) in wound beds for the anchorage of skin grafts instead of conventional methods like sutures, staplers or glue. In a single-center, randomized controlled prospective study of nine months' duration, 200 patients with wounds were divided into two equal groups. Autologous PRP was applied on wound beds in the PRP group, and conventional methods like staples/sutures were used to anchor the skin grafts in the control group. Instant graft adherence to the wound bed was statistically significantly more frequent in the PRP group. The time of first post-graft inspection was delayed, and hematoma, graft edema, discharge from the graft site, frequency of dressings and duration of stay in the plastic surgery unit were significantly reduced in the PRP group. Autologous PRP ensured instant skin graft adherence to the wound bed in comparison to conventional methods of anchorage. Hence, we recommend the routine use of autologous PRP on wounds prior to resurfacing to ensure the benefits of early healing.
Threshold-driven optimization for reference-based auto-planning
NASA Astrophysics Data System (ADS)
Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo
2018-02-01
We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
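The voxel-based quadratic penalty that threshold-driven planning manipulates can be written compactly. In this sketch (names and the two-threshold split are illustrative assumptions), a TORA-style planner would update t_under and t_over from the reference DVH between iterations rather than re-tuning the weights.

```python
import numpy as np

def voxel_objective(dose, t_under, t_over, w_under=1.0, w_over=1.0):
    """Voxel-based quadratic penalty with dose thresholds: dose below
    t_under and above t_over is penalized quadratically.  dose, t_under,
    t_over are arrays over the voxels of a structure."""
    under = np.maximum(t_under - dose, 0.0)
    over = np.maximum(dose - t_over, 0.0)
    return w_under * np.sum(under**2) + w_over * np.sum(over**2)

# threshold-driven planning: keep w_* fixed, move the thresholds toward
# the per-voxel dose values assigned from the reference DVH, re-solve.
```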
Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit
2017-03-01
Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This is a prospective study of the radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who performed the new method first, followed by the conventional method, and those who performed the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) with the new slump sitting method than with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001 and p=.001, respectively). There was no significant difference between the two methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring the angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.
Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping
2018-07-01
Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment tumors automatically. A modified scale-invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from the segmented tumors. A Fisher vector was proposed to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to extract the SIFT features with PCNSL-GBM discrimination ability. The data set was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation based on 20 cases of PCNSL and 44 cases of GBM was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA. The proposed method differentiates PCNSL from GBM with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort). Owing to the local voxel-arrangement characterization provided by the SIFT features, the proposed method produced more competitive PCNSL-GBM differentiation performance using conventional MRI than methods based on advanced MRI.
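The validation scheme, an SVM evaluated with leave-one-out cross-validation, is sketched below with scikit-learn; the upstream SIFT/Fisher-vector/GA feature pipeline is not reproduced, and the linear kernel is an assumption.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def loo_accuracy(X, y):
    """Leave-one-out cross-validated accuracy of an SVM classifier.
    X: (n_samples, n_features) feature matrix; y: binary labels."""
    hits = 0
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel='linear').fit(X[train], y[train])
        hits += int(clf.predict(X[test])[0] == y[test][0])
    return hits / len(y)
```

Note that in practice any feature selection should be nested inside the cross-validation loop; selecting features on the full data set before LOOCV gives optimistically biased estimates.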
Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha
2014-01-01
Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
Thwarting science by protecting the received wisdom on tobacco addiction from the scientific method.
Difranza, Joseph R
2010-11-04
In their commentary, Dar and Frenk call into question the validity of all published data that describe the onset of nicotine addiction. They argue that the data that describe the early onset of nicotine addiction is so different from the conventional wisdom that it is irrelevant. In this rebuttal, the author argues that the conventional wisdom cannot withstand an application of the scientific method that requires that theories be tested and discarded when they are contradicted by data. The author examines the origins of the threshold theory that has represented the conventional wisdom concerning the onset of nicotine addiction for 4 decades. The major tenets of the threshold theory are presented as hypotheses followed by an examination of the relevant literature. Every tenet of the threshold theory is contradicted by all available relevant data and yet it remains the conventional wisdom. The author provides an evidence-based account of the natural history of nicotine addiction, including its onset and development as revealed by case histories, focus groups, and surveys involving tens of thousands of smokers. These peer-reviewed and replicated studies are the work of independent researchers from around the world using a variety of measures, and they provide a consistent and coherent clinical picture. The author argues that the scientific method demands that the fanciful conventional wisdom be discarded and replaced with the evidence-based description of nicotine addiction that is backed by data. The author charges that in their attempt to defend the conventional wisdom in the face of overwhelming data to the contrary, Dar and Frenk attempt to destroy the credibility of all who have produced these data. Dar and Frenk accuse other researchers of committing methodological errors and showing bias in the analysis of data when in fact Dar and Frenk commit several errors and reveal their bias by using a few outlying data points to misrepresent an entire body of research, and by grossly and consistently mischaracterizing the claims of those whose research they attack.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; at the same time, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.
Timing Recovery Strategies in Magnetic Recording Systems
NASA Astrophysics Data System (ADS)
Kovintavewat, Piya
At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the timing recovery strategies based on PSP perform better than the conventional ones and are thus worth employing in magnetic recording systems.
A new method and device of aligning patient setup lasers in radiation therapy.
Hwang, Ui-Jung; Jo, Kwanghyun; Lim, Young Kyung; Kwak, Jung Won; Choi, Sang Hyuon; Jeong, Chiyoung; Kim, Mi Young; Jeong, Jong Hwi; Shin, Dongho; Lee, Se Byeong; Park, Jeong-Hoon; Park, Sung Yong; Kim, Siyong
2016-01-08
The aim of this study is to develop a new method to align the patient setup lasers in a radiation therapy treatment room and to examine its validity and efficiency. The new laser alignment method is realized by a device composed of a metallic base plate and several acrylic transparent plates. Except for one, every plate has either a crosshair line (CHL) or a single vertical line that is used for alignment. Two holders for radiochromic film insertion are provided in the device to find the radiation isocenter. The correct laser positions can be found optically by matching the shadows of all the CHLs in the gantry head and the device. The reproducibility, accuracy, and efficiency of laser alignment and the dependency on the position error of the light source were evaluated by comparing the means and the standard deviations of the measured laser positions. After the optical alignment of the lasers, the radiation isocenter was found by gantry and collimator star shots, and the lasers were then translated parallel to the isocenter. In the laser position reproducibility test, the mean and standard deviation on the wall of the treatment room were 32.3 ± 0.93 mm for the new method, whereas they were 33.4 ± 1.49 mm for the conventional method. The mean alignment accuracy on the walls was 1.4 mm for the new method and 2.1 mm for the conventional method. In the test of the dependency on the light source position error, the mean laser position shifted by roughly the same amount as the light source in the new method, whereas the shift was greatly magnified in the conventional method. In this study, a new laser alignment method was devised and evaluated successfully. The new method provided more accurate, more reproducible, and faster alignment of the lasers than the conventional method.
Saam, Tobias; Herzen, Julia; Hetterich, Holger; Fill, Sandra; Willner, Marian; Stockmar, Marco; Achterhold, Klaus; Zanette, Irene; Weitkamp, Timm; Schüller, Ulrich; Auweter, Sigrid; Adam-Neumair, Silvia; Nikolaou, Konstantin; Reiser, Maximilian F.; Pfeiffer, Franz; Bamberg, Fabian
2013-01-01
Objectives: Phase-contrast imaging is a novel X-ray based technique that provides enhanced soft tissue contrast. The aim of this study was to evaluate the feasibility of visualizing human carotid arteries by grating-based phase-contrast tomography (PC-CT) in two different experimental set-ups: (i) applying synchrotron radiation and (ii) using a conventional X-ray tube. Materials and Methods: Five ex-vivo carotid artery specimens were examined with PC-CT either at the European Synchrotron Radiation Facility using a monochromatic X-ray beam (2 specimens; 23 keV; pixel size 5.4 µm) or at a laboratory set-up with a conventional X-ray tube (3 specimens; 35-40 kVp; 70 mA; pixel size 100 µm). Tomographic images were reconstructed and compared to histopathology. Two independent readers determined vessel dimensions, and one reader determined signal-to-noise ratios (SNR) between PC-CT and absorption images. Results: In total, 51 sections were included in the analysis. Images from both set-ups provided sufficient contrast to differentiate individual vessel layers. All PCI-based measurements strongly predicted but significantly overestimated lumen, intima and vessel wall area for both the synchrotron and the laboratory-based measurements as compared with histology (all p<0.001 with slope >0.53 per mm2, 95%-CI: 0.35 to 0.70). Although synchrotron-based images were characterized by higher SNRs than laboratory-based images, both PC-CT set-ups had superior SNRs compared to the corresponding conventional absorption-based images (p<0.001). Inter-reader reproducibility was excellent (ICCs >0.98 and >0.84 for synchrotron and laboratory-based measurements, respectively). Conclusion: Experimental PC-CT of carotid specimens is feasible with both synchrotron and conventional X-ray sources, producing high-resolution images suitable for vessel characterization and atherosclerosis research. PMID:24039969
Gong, Tong; Brew, Bronwyn; Sjölander, Arvid; Almqvist, Catarina
2017-07-01
Various epidemiological designs have been applied to investigate the causes and consequences of fetal growth restriction in register-based observational studies. This review seeks to provide an overview of several conventional designs, including cohort and case-control designs, as well as more recently applied non-conventional designs such as family-based designs. We also discuss some practical points regarding the application and interpretation of family-based designs. Definitions of each design, the study population, and the exposure and outcome measures are briefly summarised. Examples of study designs are taken from the field of low birth-weight research for illustrative purposes. Also examined are the relative advantages and disadvantages of each design in terms of assumptions, potential selection and information bias, confounding and generalisability. Kinship data linkage, statistical models and result interpretation are discussed specifically for family-based designs. When all information is retrieved from registers, there is no evident preference for the case-control design over the cohort design to estimate odds ratios. All conventional designs included in the review are prone to bias, particularly due to residual confounding. Family-based designs are able to reduce such bias and strengthen causal inference. In the field of low birth-weight research, family-based designs have been able to confirm a negative association, not confounded by genetic or shared environmental factors, between low birth weight and the risk of asthma. We conclude that there is a broader need for family-based designs in observational research, as evidenced by their meaningful contributions to the understanding of the potential causal association between low birth weight and subsequent outcomes.
[Fragment-based drug discovery: concept and aim].
Tanaka, Daisuke
2010-03-01
Fragment-Based Drug Discovery (FBDD) has been recognized as a newly emerging lead discovery methodology that involves biophysical fragment screening and chemistry-driven fragment-to-lead stages. Although fragments, defined as structurally simple and small compounds (typically <300 Da), have not been employed in conventional high-throughput screening (HTS), recent significant progress in biophysical screening methods has enabled fragment screening at a practical level. FBDD primarily turns our attention to weakly but specifically binding fragments (hit fragments) as the starting point of medicinal chemistry. Hit fragments are then promoted to more potent lead compounds through linking or merging with another hit fragment and/or attaching functional groups. Another positive aspect of FBDD is ligand efficiency, a useful guide in screening hit selection and hit-to-lead phases to achieve lead-likeness. Owing to these features, a number of successful applications of FBDD to "undruggable targets" (where HTS and other lead identification methods failed to identify useful lead compounds) have been reported. As a result, FBDD is now expected to complement more conventional methodologies. This review, as an introduction to the following articles, summarizes the fundamental concepts of FBDD and discusses its advantages over other conventional drug discovery approaches.
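Ligand efficiency is commonly computed as the binding free energy per heavy atom, using the approximation ΔG ≈ −1.37 × pIC50 kcal/mol near room temperature (since ΔG = −2.303·RT·log Kd and 2.303·RT ≈ 1.37 kcal/mol at 298 K):

```python
def ligand_efficiency(pIC50, heavy_atoms):
    """Ligand efficiency in kcal/mol per heavy atom, using the common
    approximation dG ~ -1.37 * pIC50 kcal/mol at ~298 K."""
    return 1.37 * pIC50 / heavy_atoms

# a typical fragment hit: pIC50 = 4 (100 uM), 21 heavy atoms -> LE ~ 0.26
print(ligand_efficiency(4.0, 21))
```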
Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.
Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu
2012-09-01
The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additively manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and in indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation showed that utilising a computer-aided workflow can produce a prosthesis body comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers in addressing the limitations identified.
Ronco, Guglielmo; Segnan, Nereo; Giorgi-Rossi, Paolo; Zappa, Marco; Casadei, Gian Piero; Carozzi, Francesca; Dalla Palma, Paolo; Del Mistro, Annarosa; Folicaldi, Stefania; Gillio-Tos, Anna; Nardo, Gaetano; Naldoni, Carlo; Schincaglia, Patrizia; Zorzi, Manuel; Confortini, Massimo; Cuzick, Jack
2006-06-07
Although testing for human papillomavirus (HPV) has higher sensitivity and lower specificity than cytology alone for detecting cervical intraepithelial neoplasia (CIN), studies comparing conventional and liquid-based cytology have had conflicting results. In the first phase of a two-phase multicenter randomized controlled trial, women aged 35-60 years in the conventional arm (n = 16,658) were screened using conventional cytology, and women in the experimental arm (n = 16,706) had liquid-based cytology and were tested for high-risk HPV types using the Hybrid Capture 2 assay. Women in the conventional arm were referred to colposcopy with atypical cells of undetermined significance (ASCUS) or higher, and those in the experimental arm were referred with ASCUS or higher cytology or with a positive (≥1 pg/mL) HPV test. Sensitivity and positive predictive value (PPV) for detection of cervical intraepithelial neoplasia grade 2 or higher (CIN2+) were calculated. The screening methods and referral criterion applied in the experimental arm had higher sensitivity than that in the conventional arm (relative sensitivity = 1.47; 95% confidence interval [CI] = 1.03 to 2.09) but a lower PPV (relative PPV = 0.40; 95% CI = 0.23 to 0.66). With HPV testing alone at ≥1 pg/mL and at ≥2 pg/mL, the gain in sensitivity compared with the conventional arm remained similar (relative sensitivity = 1.43, 95% CI = 1.00 to 2.04 and relative sensitivity = 1.41, 95% CI = 0.98 to 2.01, respectively) but PPV progressively improved (relative PPV = 0.58, 95% CI = 0.33 to 0.98 and relative PPV = 0.75, 95% CI = 0.45 to 1.27, respectively). Referral based on liquid-based cytology alone did not increase sensitivity compared with conventional cytology (relative sensitivity = 1.06; 95% CI = 0.72 to 1.55) but reduced PPV (relative PPV = 0.57; 95% CI = 0.39 to 0.82). HPV testing alone was more sensitive than conventional cytology among women 35-60 years old. Adding liquid-based cytology improved sensitivity only marginally but increased false-positives. HPV testing using Hybrid Capture 2 with a 2 pg/mL cutoff may be more appropriate than a 1 pg/mL cutoff for primary cervical cancer screening.
An efficient ensemble learning method for gene microarray classification.
Osareh, Alireza; Shadgar, Bita
2013-01-01
Gene microarray analysis and classification have proved an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
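A compressed sketch of the RotBoost idea with scikit-learn is given below: build one Rotation-Forest-style rotation (PCA on random feature subsets) and boost trees on the rotated data. A faithful RotBoost repeats this over many rotations and votes; the subset count, tree depth, and estimator count here are illustrative, and the sketch assumes more samples than features per subset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def rotboost_fit(X, y, n_subsets=4, seed=0):
    """Fit AdaBoost on one Rotation-Forest-style feature rotation.
    Returns the boosted classifier and the rotation matrix R;
    predict on new data with clf.predict(X_new @ R)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(X.shape[1])              # random feature partition
    blocks = np.array_split(idx, n_subsets)
    R = np.zeros((X.shape[1], X.shape[1]))
    for b in blocks:
        # full PCA rotation on each feature subset (needs n_samples >= len(b))
        R[np.ix_(b, b)] = PCA(n_components=len(b)).fit(X[:, b]).components_.T
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=50).fit(X @ R, y)
    return clf, R
```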
Tracing the conformational changes in BSA using FRET with environmentally-sensitive squaraine probes
NASA Astrophysics Data System (ADS)
Govor, Iryna V.; Tatarets, Anatoliy L.; Obukhova, Olena M.; Terpetschnig, Ewald A.; Gellerman, Gary; Patsenker, Leonid D.
2016-06-01
A new potential method of detecting conformational changes in hydrophobic proteins such as bovine serum albumin (BSA) is introduced. The method is based on the change in the Förster resonance energy transfer (FRET) efficiency between protein-sensitive fluorescent probes. In contrast to conventional FRET-based methods, in this new approach the donor and acceptor dyes are not covalently linked to the protein molecules. The performance of the new method is demonstrated using the protein-sensitive squaraine probes Square-634 (donor) and Square-685 (acceptor) to detect urea-induced conformational changes of BSA. The FRET efficiency between these probes is a more sensitive parameter for tracing protein unfolding than the change in fluorescence intensity of either probe alone: addition of urea followed by BSA unfolding causes a noticeable decrease in the emission intensities of these probes (a factor of 5.6 for Square-634 and 3.0 for Square-685), whereas the FRET efficiency changes by a factor of up to 17. The new approach is therefore a more sensitive way to detect conformational changes in BSA than the conventional method.
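For context, FRET efficiency depends steeply on the donor-acceptor distance r relative to the Förster radius R0, and is commonly estimated from the quenching of donor emission. These are the standard textbook relations, not expressions specific to the squaraine probes used here:

\[
E \;=\; \frac{R_0^6}{R_0^6 + r^6} \;\approx\; 1 - \frac{I_{\mathrm{DA}}}{I_{\mathrm{D}}},
\]

where I_DA and I_D are the donor intensities with and without the acceptor. The sixth-power distance dependence is what makes E such a sensitive reporter of unfolding-induced changes in donor-acceptor separation.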
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are algorithms designed to preserve visibility and the overall impression of brightness, contrast, and color when high dynamic range (HDR) images are reproduced on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, their results have not matched those of psychophysical experiments based on the human visual system. A color-rendering model is presented that combines tone-mapping and cone-response functions in the XYZ tristimulus color space. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color when HDR images are mapped onto relatively LDR devices. The tone-mapped image is computed from chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function that emphasizes human visual perception (HVP). The proposed method thus covers the mismatch between the actual scene and the rendered image on the basis of HVP. Experimental results show that the proposed method yields improved color-rendering performance compared with conventional methods.
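A minimal sketch of a Naka-Rushton-style cone-response compression, a standard building block in HDR tone mapping, is shown below; the exponent and the semi-saturation rule are common literature choices, not the authors' parameters.

```python
# Naka-Rushton photoreceptor-style compression: V = L^n / (L^n + sigma^n).
import numpy as np

def cone_response(luminance, n=0.73, sigma=None):
    """Compress HDR luminance into [0, 1); sigma defaults to the scene's log-average."""
    if sigma is None:
        sigma = np.exp(np.mean(np.log(luminance + 1e-6)))  # adapt to the scene "key"
    Ln = np.power(luminance, n)
    return Ln / (Ln + np.power(sigma, n))

hdr = np.exp(np.random.uniform(-4, 4, size=(4, 4)))  # fake HDR luminance, ~8 log units
print(cone_response(hdr))
```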
Thermal Desorption Analysis of Effective Specific Soil Surface Area
NASA Astrophysics Data System (ADS)
Smagin, A. V.; Bashina, A. S.; Klyueva, V. V.; Kubareva, A. V.
2017-12-01
A new method of assessing the effective specific surface area, based on the successive thermal desorption of water vapor at different temperature stages of sample drying, is analyzed in comparison with the conventional static adsorption method using a representative set of soil samples of different genesis and degree of dispersion. The theory of the method uses the fundamental relationship between the thermodynamic water potential (Ψ) and the absolute temperature of drying (T): Ψ = Q - aT, where Q is the specific heat of vaporization and a is a physically based parameter related to the initial temperature and relative humidity of the air in the external thermodynamic reservoir (laboratory). From gravimetric data on the mass fraction of water (W) and the Ψ value, Polanyi potential curves (W(Ψ)) are plotted for the studied samples. Water sorption isotherms are then calculated, from which the monolayer capacity and the target effective specific surface area are determined using the BET theory. Comparative analysis shows that the new method agrees well with the conventional estimation of the degree of dispersion by the BET and Kutilek methods over a wide range of specific surface area values between 10 and 250 m2/g.
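To make the workflow concrete, here is a minimal numerical sketch of the chain described above (drying temperature → potential Ψ → relative humidity → BET surface area); the constants, temperature range, and demo data are our illustrative assumptions, not the authors' values.

```python
# Thermal-desorption BET sketch; SI units throughout, demo data fabricated.
import numpy as np

R = 8.314          # J/(mol K)
M_w = 0.018015     # kg/mol, water
A_m = 10.8e-20     # m^2, cross-section of an adsorbed water molecule (common assumption)
N_A = 6.022e23

def potential(Q, a, T):
    """Water potential from drying temperature: Psi = Q - a*T (J/kg)."""
    return Q - a * T

def relative_humidity(psi, T_lab=293.15):
    """Kelvin-type relation: psi = (R*T/M_w) * ln(p/p0), solved for p/p0."""
    return np.exp(psi * M_w / (R * T_lab))

def bet_surface_area(p_rel, w):
    """Fit the linearized BET equation on 0.05 < p/p0 < 0.35; returns m^2/g."""
    mask = (p_rel > 0.05) & (p_rel < 0.35)
    x = p_rel[mask]
    y = x / (w[mask] * (1 - x))
    slope, intercept = np.polyfit(x, y, 1)
    w_m = 1.0 / (slope + intercept)          # monolayer capacity, g water / g soil
    return w_m / (M_w * 1e3) * N_A * A_m     # specific surface area, m^2/g

# toy demo with fabricated drying stages
T = np.linspace(305, 420, 12)                # drying temperatures, K
w = np.linspace(0.12, 0.01, 12)              # gravimetric water content, g/g (fake)
p_rel = relative_humidity(potential(2.45e6, 8.2e3, T))
print(f"S ~ {bet_surface_area(p_rel, w):.0f} m^2/g")
```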
Label-Free, Flow-Imaging Methods for Determination of Cell Concentration and Viability.
Sediq, A S; Klem, R; Nejadnik, M R; Meij, P; Jiskoot, Wim
2018-05-30
We investigated the potential of two flow imaging microscopy (FIM) techniques, Micro-Flow Imaging (MFI) and FlowCAM, to determine total cell concentration and cell viability. B-lineage acute lymphoblastic leukemia (B-ALL) cells from 2 different donors were exposed to ambient conditions. Samples were taken on different days and measured with MFI, FlowCAM, hemocytometry, and automated cell counting. Dead and live cells from a fresh B-ALL cell suspension were fractionated by flow cytometry in order to derive software filters based on morphological parameters of the separate cell populations in MFI and FlowCAM. The filter sets were used to assess cell viability in the measured samples. All techniques gave fairly similar cell concentration values over the whole incubation period. MFI proved superior with respect to precision, whereas FlowCAM provided particle images with a higher resolution. Moreover, both FIM methods provided results for cell viability similar to those of the conventional methods (hemocytometry and automated cell counting). FIM-based methods may be advantageous over conventional cell methods for determining total cell concentration and cell viability, as FIM measures much larger sample volumes, does not require labeling, is less laborious, and provides images of individual cells.
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
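For reference, the conventional Kaiser-Bessel gridding kernel discussed above can be sketched as follows; the width J and the shape parameter α = 2.34J (a value reported as near-optimal for 2x oversampling in the gridding literature) are typical choices, not necessarily the paper's exact settings.

```python
# Kaiser-Bessel gridding kernel, nonzero on |u| <= J/2 (u in grid-sample units).
import numpy as np
from scipy.special import i0  # zeroth-order modified Bessel function

def kaiser_bessel(u, J=6, alpha=None):
    if alpha is None:
        alpha = 2.34 * J  # shape parameter reported as near-optimal for 2x oversampling
    u = np.asarray(u, dtype=float)
    arg = 1.0 - (2.0 * u / J) ** 2
    # evaluate only inside the support; clip avoids sqrt of negative values
    return np.where(arg > 0, i0(alpha * np.sqrt(np.clip(arg, 0, None))) / i0(alpha), 0.0)

print(kaiser_bessel(np.linspace(-3, 3, 7)))
```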
Illés, Tamás; Somoskeöy, Szabolcs
2013-06-01
A new concept of vertebra vectors based on spinal three-dimensional (3D) reconstructions of images from the EOS system, a new low-dose X-ray imaging device, was recently proposed to facilitate interpretation of EOS 3D data, especially with regard to horizontal plane images. This retrospective study aimed to evaluate the spinal layout visualized by EOS 3D and vertebra vectors before and after surgical correction, to compare scoliotic spine measurement values based on 3D vertebra vectors with measurements using conventional two-dimensional (2D) methods, and to evaluate horizontal plane vector parameters for their relationship with the magnitude of scoliotic deformity. 95 patients with adolescent idiopathic scoliosis operated on according to the Cotrel-Dubousset principle underwent EOS X-ray examinations pre- and postoperatively, followed by 3D reconstructions and generation of vertebra vectors in a calibrated coordinate system to calculate vector coordinates and parameters, as published earlier. Differences between values from conventional 2D Cobb methods and from vertebra vector-based methods were evaluated by a means-comparison t test, and the relationship of corresponding parameters was analysed by bivariate correlation. The relationship of horizontal plane vector parameters with the magnitude of scoliotic deformities and the results of surgical correction was analysed by Pearson correlation and linear regression. In comparison with manual 2D methods, a very close relationship was detectable in vertebra vector-based curvature data for coronal curves (preop r = 0.950, postop r = 0.935) and thoracic kyphosis (preop r = 0.893, postop r = 0.896), while the small difference found in L1-L5 lordosis values (preop r = 0.763, postop r = 0.809) was shown to be strongly related to the magnitude of the corresponding L5 wedge. The correlation analysis revealed a strong correlation between the magnitude of scoliosis and the lateral translation of the apical vertebra in the horizontal plane, represented by the horizontal plane coordinates of the terminal and initial points of the apical vertebra vectors (r = 0.701; r = 0.667). A weaker correlation was detected between the axial rotation of the apical vertebrae and the magnitude of the frontal curves (r = 0.459). Vertebra vectors provide a key opportunity to visualize spinal deformities in all three planes simultaneously. Measurement methods based on vertebra vectors proved just as accurate and reliable as conventional measurement methods for coronal and sagittal plane parameters. In addition, the horizontal plane display of the curves can be studied using the same vertebra vectors. Based on the vertebra vector data, the reduction of the lateral translation of the vertebrae appears to matter more for the results of surgical correction of spinal deformities than the correction of axial rotation.
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points generated when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
Computer-Based Instruction and Health Professions Education: A Meta-Analysis of Outcomes.
ERIC Educational Resources Information Center
Cohen, Peter A.; Dacanay, Lakshmi S.
1992-01-01
The meta-analytic techniques of G. V. Glass were used to statistically integrate findings from 47 comparative studies on computer-based instruction (CBI) in health professions education. A clear majority of the studies favored CBI over conventional methods of instruction. Results show higher-order applications of computers to be especially…
CACTUS: Command and Control Training Using Knowledge-Based Simulations
ERIC Educational Resources Information Center
Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.
2008-01-01
The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…
Efficient Broadband Terahertz Radiation Detectors Based on Bolometers with a Thin Metal Absorber
NASA Astrophysics Data System (ADS)
Dem'yanenko, M. A.
2018-01-01
The matrix method has been used to calculate the coefficients of absorption of terahertz radiation in conventional (with radiation incident from vacuum adjacent to the bolometer) and inverted (with radiation incident from the substrate on which the bolometer was fabricated) bolometric structures. Near-unity absorption coefficients were obtained when an additional cavity in the form of a gap between the bolometer and the input or output window was introduced. Conventional bolometers then became narrowband, while inverted-type devices remained broadband.
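A minimal sketch of the characteristic (transfer) matrix method referred to above, for normal incidence on a thin absorbing film, is given below; the refractive index, thickness, substrate index, and wavelength are invented for illustration.

```python
# Characteristic-matrix absorption calculation for a thin film stack (normal incidence).
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one layer with complex index n and thickness d."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def absorption(n_layers, d_layers, n_in=1.0, n_sub=3.4, lam=100e-6):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_sub])          # stack terminated by the substrate
    r = (n_in * B - C) / (n_in * B + C)        # amplitude reflection coefficient
    t = 2 * n_in / (n_in * B + C)              # amplitude transmission coefficient
    T = np.real(n_sub) / n_in * abs(t) ** 2
    return 1 - abs(r) ** 2 - T                 # absorbed fraction

# thin metal film modeled with a large complex index (hypothetical THz-range values)
print(absorption([500 + 500j], [10e-9]))
```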
NASA Astrophysics Data System (ADS)
Zuoming, Sun; Ningfang, Song; Jing, Jin; Jingming, Song; Pan, Ma
2012-12-01
An efficient and simple method for fusion splicing a Polarization-Maintaining Photonic Crystal Fiber (PM-PCF) and a conventional Polarization-Maintaining Fiber (PMF), with an experimentally measured low loss of 0.65 dB, is reported. The minimum bending diameter of the joint can reach 2 cm. A theoretical calculation of the splicing loss based on the mode field diameter (MFD) mismatch of the two kinds of fibers is given. All parameters affecting the splicing loss were studied.
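The splicing loss from mode-field mismatch mentioned above is commonly estimated, for Gaussian modes at a lossless, well-aligned joint, as

\[
\alpha_{\mathrm{dB}} \;=\; -20\,\log_{10}\!\left(\frac{2\,w_1 w_2}{w_1^2 + w_2^2}\right),
\]

where w1 and w2 are the mode-field radii of the two fibers. For illustration (values assumed, not taken from the paper), w1 = 4.5 μm and w2 = 6.0 μm give α ≈ 0.35 dB, so a measured 0.65 dB plausibly also includes contributions beyond the Gaussian mode-field mismatch.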
Gassner, C; Karlsson, R; Lipsmeier, F; Moelleken, J
2018-05-30
Previously we introduced two SPR-based assay principles (a dual-binding assay and a bridging assay), which allow the determination of two out of three possible interaction parameters for bispecific molecules within one assay setup: the two individual interactions with each target, and/or one simultaneous/overall interaction, which potentially reflects the inter-dependency of both individual binding events. However, activity and similarity are determined by comparing report points over a concentration range, which also mirrors the way data are generated by conventional ELISA-based methods. So far, binding kinetics have not been specifically considered in generic approaches to activity assessment. Here, we introduce an improved slope-ratio model which, together with a sensorgram-comparison-based similarity assessment, allows the development of a detailed, USP-conformal ligand binding assay using only a single sample concentration. We compare this novel analysis method with the usual concentration-range approach for both SPR-based assay principles and discuss its impact on data quality and increased sample throughput. Copyright © 2018 Elsevier B.V. All rights reserved.
An ex vivo approach to botanical-drug interactions: A proof of concept study
Wang, Xinwen; Zhu, Hao-Jie; Munoz, Juliana; Gurley, Bill J.; Markowitz, John S.
2015-01-01
Ethnopharmacological relevance: Botanical medicines are frequently used in combination with therapeutic drugs, imposing a risk for harmful botanical-drug interactions (BDIs). Among the existing BDI evaluation methods, clinical studies are the most desirable, but due to their expense and protracted timeline for completion, conventional in vitro methodologies remain the most frequently used BDI assessment tools. However, many predictions generated from in vitro studies are inconsistent with clinical findings. Accordingly, the present study aimed to develop a novel ex vivo approach for BDI assessment and expand the safety evaluation methodology in applied ethnopharmacological research. Materials and Methods: This approach differs from conventional in vitro methods in that, rather than botanical extracts or individual phytochemicals being prepared in artificial buffers, human plasma/serum collected from a limited number of subjects administered botanical supplements was utilized to assess BDIs. To validate the methodology, human plasma/serum samples collected from healthy subjects administered either milk thistle or goldenseal extracts were utilized in incubation studies to determine their potential inhibitory effects on CYP2C9 and CYP3A4/5, respectively. Silybin A and B, two principal milk thistle phytochemicals, and hydrastine and berberine, the purported active constituents of goldenseal, were evaluated in both phosphate buffer and human plasma based in vitro incubation systems. Results: Ex vivo study results were consistent with formal clinical study findings for the effect of milk thistle on the disposition of tolbutamide, a CYP2C9 substrate, and for goldenseal's influence on the pharmacokinetics of midazolam, a widely accepted CYP3A4/5 substrate. Compared with conventional in vitro BDI methodologies, the introduction of human plasma into the in vitro study model changed the observed inhibitory effects of silybin A, silybin B, hydrastine, and berberine on CYP2C9 and CYP3A4/5, respectively, yielding results that more closely mirrored those generated in clinical studies. Conclusions: Data from conventional buffer-based in vitro studies were less predictive than the ex vivo assessments. Thus, this novel ex vivo approach may be more effective at predicting clinically relevant BDIs than conventional in vitro methods. PMID:25623616
Electronic cigarette substitution in the experimental tobacco marketplace: A review.
Bickel, Warren K; Pope, Derek A; Kaplan, Brent A; Brady DeHart, W; Koffarnus, Mikhail N; Stein, Jeffrey S
2018-04-24
The evolution of science derives, in part, from the development and use of new methods and techniques. Here, we discuss one development that may have an impact on the understanding of tobacco regulatory science: namely, the application of behavioral economics to the complex tobacco marketplace. The purpose of this paper is to review studies that examine conditions affecting the degree to which electronic nicotine delivery system (ENDS) products substitute for conventional cigarettes in the Experimental Tobacco Marketplace (ETM). Collectively, the following factors constitute the current experimental understanding of conditions that affect ENDS use and substitution for conventional cigarettes: increasing the base price of conventional cigarettes, increasing taxation of conventional cigarettes, subsidizing the price of ENDS products, increasing ENDS nicotine strength, and providing narratives that illustrate the potential health benefits of ENDS consumption in lieu of conventional cigarettes. Each of these factors is likely moderated by consumer characteristics, which include prior ENDS use, ENDS use risk perception, and gender. Overall, the ETM provides a unique method to explore and identify the conditions under which various nicotine products may interact with one another in a way that mimics the real world. In addition, the ETM permits the efficacy of a broad range of potential nicotine policies and regulations to be measured prior to governmental implementation. Copyright © 2017. Published by Elsevier Inc.
Standardizing lightweight deflectometer modulus measurements for compaction quality assurance
DOT National Transportation Integrated Search
2017-09-01
To evaluate the compaction of unbound geomaterials under unsaturated conditions and replace the conventional methods with a practical modulus-based specification using LWD, this study examined three different LWDs, the Zorn ZFG 3000 LWD, Dynatest 303...
Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali
2018-05-01
Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited by their accuracy and computing cost-effectiveness. In this Letter, to the best of our knowledge, we present a novel bulk motion compensation method for phase-based functional OCT. Bulk motion associated phase shift can be directly derived by solving its equation using a standard deviation of phase-based OCTA and Doppler OCT flow signals. This method was evaluated on rodent retinal images acquired by a prototype visible light OCT and human retinal images acquired by a commercial system. The image quality and computational speed were significantly improved, compared to two conventional phase compensation methods.
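For comparison, a common conventional baseline for bulk motion correction estimates one bulk phase per A-line as the amplitude-weighted circular mean of interframe phase differences and subtracts it. The sketch below shows that baseline, not the standard-deviation-based method introduced in the Letter.

```python
# Conventional bulk-phase estimation and compensation for phase-based OCT.
import numpy as np

def bulk_phase(frame1, frame2):
    """frames: complex OCT B-scans of shape (z, x); returns one bulk phase per A-line."""
    dphi = frame2 * np.conj(frame1)        # phasor of the interframe phase difference
    # summing over depth amplitude-weights the estimate, suppressing noisy pixels
    return np.angle(np.sum(dphi, axis=0))

def compensate(frame1, frame2):
    """Remove the per-A-line bulk phase from frame2."""
    return frame2 * np.exp(-1j * bulk_phase(frame1, frame2))[None, :]

# toy demo: a pure 0.8 rad bulk shift is recovered for every A-line
rng = np.random.default_rng(0)
f1 = rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))
print(bulk_phase(f1, f1 * np.exp(1j * 0.8)))
```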
Jin, Hong-Ying; Li, Da-Wei; Zhang, Na; Gu, Zhen; Long, Yi-Tao
2015-06-10
We demonstrate a practical method to analyze carbohydrate-protein interactions based on single plasmonic nanoparticles using conventional dark field microscopy (DFM). The protein concanavalin A (ConA) was modified on large gold nanoparticles (AuNPs), and dextran was conjugated on small AuNPs. As the interaction between ConA and dextran coupled the two kinds of gold nanoparticles, and thereby their plasmonic oscillations, apparent color changes (from green to yellow) of the single AuNPs were observed through DFM. The color information was then instantly transformed into a statistical peak wavelength distribution in less than 1 min by a self-developed statistical program (nanoparticleAnalysis). In addition, the interaction between ConA and dextran was confirmed to be a biospecific recognition. This approach is high-throughput and real-time, and is a convenient method to analyze carbohydrate-protein interactions efficiently at the single nanoparticle level.
Kilic, Tugba; Erdem, Arzum; Ozsoz, Mehmet; Carrara, Sandro
2018-01-15
As the most extensively studied non-coding, evolutionarily conserved, post-transcriptional gene regulators of the genome, microRNAs (miRNAs) have attracted great attention across various disciplines due to their important roles in biological processes and their link with cancer. Owing to their diagnostic value, many conventional methods have been used to detect miRNAs, including northern blotting, quantitative real-time PCR (qRT-PCR), and microarray technology, besides novel techniques based on various nanotechnology approaches and molecular biology tools, including miRNA biosensors. The aim of this review is to explain the importance of miRNAs in the biomedical field, particularly for early cancer diagnosis, by overviewing both research-based and commercially available miRNA detection methods of the last decade and considering their strengths and weaknesses, with an emphasis on miRNA biosensors. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one class of methods developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to obtain the composition of micro-patterns within sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
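A minimal sketch of the pipeline described (per-block uniform-LBP histograms followed by Kruskal-Wallis ranking of histogram bins) might look as follows, assuming scikit-image and SciPy; the block partitioning and the number of retained bins are our illustrative choices, not the paper's settings.

```python
# Uniform-LBP histograms per sub-block + Kruskal-Wallis bin selection.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.stats import kruskal

def lbp_histogram(block, P=8, R=1):
    """Normalized histogram of uniform LBP codes (P + 2 possible values)."""
    codes = local_binary_pattern(block, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def kw_select(features, labels, k=50):
    """Keep the k feature dimensions with the largest Kruskal-Wallis H statistic.
    features: (n_samples, n_dims); constant dimensions must be dropped beforehand,
    since kruskal() rejects groups whose values are all identical."""
    classes = np.unique(labels)
    H = np.array([kruskal(*[features[labels == c, j] for c in classes]).statistic
                  for j in range(features.shape[1])])
    return np.argsort(H)[::-1][:k]
```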
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Qiancheng; Li, Hongyuan; Zhang, Yi
2018-04-01
In marine seismic exploration, the ghost energies (down-going waves) that arise from reflection at the surface are often treated as unwanted signals in data processing. The ghost wave fields interfere with the desired primary signals, leading to frequency notches and attenuation of low frequencies, which in turn degrade the resolution of the recorded seismic data. There are two main categories of methods to solve the ghost, or so-called notch, problem: techniques based on non-conventional acquisition configurations and deghosting algorithm-based solutions. The variable-depth streamer (VDS) acquisition solution is one of the most representative methods in the first category and has become a popular marine seismic acquisition approach for obtaining broad data bandwidth. However, this approach is not as economical as conventional constant-depth streamer (CDS) acquisition, owing to the precise control required of the towed streamer. In addition, large quantities of conventionally towed legacy data are stored in data libraries. Applying receiver deghosting to CDS data thus becomes a more economical option. In theory, both types of data after deghosting should have the same bandwidth and S/N ratio, but in reality they differ. In this paper, we conduct a comparative study and evaluation in which receiver deghosting is applied to a set of real 2D marine data including both types of acquisition (CDS and VDS) over the same geology. The deghosting algorithm we employed is a self-sustained, inversion-based approach operating in the τ-p domain. This evaluation helps us answer two questions: whether VDS acquisition has more broadband characteristics than conventional CDS acquisition after deghosting, and whether we can achieve identical or similar data quality (e.g., S/N ratio) for both types of data through a proper deghosting algorithm. The comparative results are illustrated and discussed.
Pacheco-Fernández, Idaira; Pino, Verónica; Ayala, Juan H; Afonso, Ana M
2018-07-20
The IL-based surfactant octylguanidinium chloride (C8Gu-Cl) was designed and synthesized with the purpose of obtaining a less harmful surfactant, containing guanidinium as the core cation and a relatively short alkyl chain. Its interfacial and aggregation behavior was evaluated through conductivity and fluorescence measurements, giving critical micelle concentration values of 42.5 and 44.6 mmol L-1, respectively. Cytotoxicity studies were carried out with C8Gu-Cl and other IL-based and conventional surfactants, specifically the analogue 1-octyl-3-methylimidazolium chloride (C8MIm-Cl) and other imidazolium- (C16MIm-Br) and pyridinium- (C16Py-Cl) based surfactants, together with the conventional cationic surfactant CTAB and the conventional anionic surfactant SDS. In these studies, C8Gu-Cl was the only surfactant classified as having low cytotoxicity. An in situ dispersive liquid-liquid microextraction (DLLME) method, based on transforming the water-soluble C8Gu-Cl IL-based surfactant into a water-insoluble IL microdroplet via a simple metathesis reaction, was then selected as the extraction/preconcentration method for a group of 6 personal care products (PCPs) present in cosmetic samples. The method was carried out in combination with high-performance liquid chromatography (HPLC) and diode array detection (DAD). The method was properly optimized, requiring only 30 μL of C8Gu-Cl for 10 mL of aqueous sample with a NaCl content of 8% (w/v) to adjust the ionic strength, at a pH value of 5. The metathesis reaction required the addition of the anion exchange reagent (bis[(trifluoromethyl)sulfonyl]imide, 1:1 molar ratio), followed by vortexing and centrifugation, and dilution of the final microdroplet up to 60 μL with acetonitrile before injection into the HPLC-DAD system. The optimum in situ DLLME-HPLC-DAD method takes ~10 min for the extraction step and ~22 min for the chromatographic separation, with low detection limits (down to 0.4 μg L-1), high reproducibility (RSD values lower than 10% intra-day and 16% inter-day for a spiked level of 15 μg L-1), and an average enrichment factor of 89. The requirement of low volumes (30 μL) of a low-cytotoxicity IL-based surfactant allows the method to be considered less harmful than other common analytical microextraction approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application
Davidson, J.B.; Case, A.L.
1981-12-30
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be treated by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integrating times of many minutes. A light image, stored in the form of a charge image on the camera tube target, is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point in the image may be determined from the number at the memory address of that point. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared with conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.
Epidermal segmentation in high-definition optical coherence tomography.
Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang
2015-01-01
Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method for HD-OCT is proposed in which the epidermis is segmented in 3 steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for the study and clinical management of skin diseases.
Chao, Maria T.; Wade, Christine; Kronenberg, Fredi
2009-01-01
Background: Complementary and alternative medicine (CAM) is often used alongside conventional medical care, yet fewer than half of patients disclose CAM use to medical doctors. CAM disclosure is particularly low among racial/ethnic minorities, but reasons for differences, such as type of CAM used or quality of conventional healthcare, have not been explored. Objective: We tested the hypotheses that disclosure of CAM use to medical doctors is higher for provider-based CAM and among non-Hispanic whites, and that access to and quality of conventional medical care account for racial/ethnic differences in CAM disclosure. Methods: Bivariate and multiple variable analyses of the 2002 National Health Interview Survey and 2001 Health Care Quality Survey were performed. Results: Disclosure of CAM use to medical providers was higher for provider-based than self-care CAM. Disclosure of any CAM was associated with access to and quality of conventional care and higher among non-Latino whites relative to minorities. Having a regular doctor and a quality patient–provider relationship mitigated racial/ethnic differences in CAM disclosure. Conclusion: Insufficient disclosure of CAM use to conventional providers, particularly for self-care practices and among minority populations, represents a serious challenge in medical encounter communications. Efforts to improve disclosure of CAM use should be aimed at improving consistency of care and patient–physician communication across racial/ethnic groups. PMID:19024232
Multi-label literature classification based on the Gene Ontology graph.
Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua
2008-12-08
The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in the literature, which urges the development of text mining approaches to facilitate the process by automatically extracting Gene Ontology annotations from the literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, by utilizing information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available to the research community. By utilizing information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.
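One concrete ingredient of such graph-based classifiers is the GO "true-path" rule: predicting a term implies all of its ancestors. A toy sketch with a hypothetical four-term DAG (edges pointing from child term to parent term) is shown below, assuming networkx; the term IDs are placeholders.

```python
# True-path propagation over a toy GO-style DAG (child -> parent edges).
import networkx as nx

go = nx.DiGraph()
go.add_edges_from([("GO:B", "GO:A"), ("GO:C", "GO:A"), ("GO:D", "GO:B")])

def propagate(predicted_terms, graph):
    """Close a set of predicted terms under the ancestor relation."""
    closed = set(predicted_terms)
    for t in predicted_terms:
        # with child->parent edges, reachable nodes are exactly the ancestors
        closed |= nx.descendants(graph, t)
    return closed

print(propagate({"GO:D"}, go))  # {'GO:D', 'GO:B', 'GO:A'}
```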
Complementary and Alternative Approaches to Pain Relief During Labor
Theau-Yonneau, Anne
2007-01-01
This review evaluated the effects of complementary and alternative medicine on pain during labor using conventional scientific methods; electronic databases through 2006 were searched. Only randomized controlled trials with outcome measures for labor pain were retained for the conclusions. Many studies did not meet the scientific inclusion criteria. According to the randomized controlled trials, we conclude that, for decreasing labor pain and/or reducing the need for conventional analgesic methods: (i) efficacy was found for acupressure and sterile water blocks; (ii) most results favored some efficacy for acupuncture and hydrotherapy; (iii) studies of other complementary or alternative therapies for labor pain control have not shown effectiveness. PMID:18227907
Social Image Tag Ranking by Two-View Learning
NASA Astrophysics Data System (ADS)
Zhuang, Jinfeng; Hoi, Steven C. H.
Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users can be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. To solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both the textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike conventional learning approaches, which usually assume some parametric model, our method is completely data-driven and makes no assumptions about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results show that the proposed method can be more effective than conventional approaches.
Ear recognition from one sample per person.
Chen, Long; Mu, Zhichun; Zhang, Baoqing; Zhang, Yi
2015-01-01
Biometrics has the advantages of efficiency and convenience in identity authentication. As one of the most promising biometric-based methods, ear recognition has received broad attention and research. Previous studies have achieved remarkable performance with multiple samples per person (MSPP) in the gallery. However, most conventional methods are insufficient when only one sample per person (OSPP) is available in the gallery. To solve the OSPP problem by maximizing the use of a single sample, this paper proposes a hybrid multi-keypoint descriptor sparse representation-based classification (MKD-SRC) ear recognition approach based on 2D and 3D information. Because most 3D sensors capture 3D data along with the corresponding 2D data, it is sensible to use both types of information. First, the ear region is extracted from the profile. Second, keypoints are detected and described for both the 2D texture image and the 3D range image. Then, the hybrid MKD-SRC algorithm is used to complete the recognition with only OSPP in the gallery. Experimental results on a benchmark dataset demonstrate the feasibility and effectiveness of the proposed method in resolving the OSPP problem. A rank-one recognition rate of 96.4% is achieved for a gallery of 415 subjects, and the computation time is satisfactory compared with conventional methods.
Li, Qiang; Liu, Hao-Li; Chen, Wen-Shiang
2013-01-01
Previous studies developed ultrasound temperature-imaging methods based on changes in backscattered energy (CBE) to monitor temperature variations during hyperthermia. In conventional CBE imaging, tracking and compensation of the echo shift due to temperature increase must be performed. Moreover, the CBE image does not enable visualization of the temperature distribution in tissues during nonuniform heating, which limits its clinical application in guiding tissue ablation treatment. In this study, we investigated a CBE imaging method based on the sliding window technique and a polynomial approximation of the integrated CBE (ICBEpa image) to overcome the difficulties of conventional CBE imaging. We conducted experiments with tissue samples of pork tenderloin ablated by microwave irradiation to validate the feasibility of the proposed method. During ablation, the raw backscattered signals were acquired using an ultrasound scanner for B-mode and ICBEpa imaging. The experimental results showed that the proposed ICBEpa image can visualize the temperature distribution in a tissue with very good contrast. Moreover, tracking and compensation of the echo shift were not necessary when using the ICBEpa image to visualize the temperature profile. The experimental findings suggest that the ICBEpa image, a new CBE imaging method, has great potential in CBE-based imaging of hyperthermia and other thermal therapies. PMID:24260041
Zhao, Gang; Tan, Wei; Hou, Jiajia; Qiu, Xiaodong; Ma, Weiguang; Li, Zhixin; Dong, Lei; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Axner, Ove; Jia, Suotang
2016-01-25
A methodology for calibration-free wavelength modulation spectroscopy (CF-WMS) that is based upon an extensive empirical description of the wavelength-modulation frequency response (WMFR) of a DFB laser is presented. An assessment of the WMFR of a DFB laser by use of an etalon confirms that it consists of two parts: a 1st harmonic component with an amplitude that is linear with the sweep, and a nonlinear 2nd harmonic component with a constant amplitude. Simulations show that, among the various factors that affect the line shape of a background-subtracted peak-normalized 2f signal, such as concentration, phase shifts between intensity modulation and frequency modulation, and the WMFR, only the last has a decisive impact. Based on this, and to avoid the impractical use of an etalon, a novel method to pre-determine the parameters of the WMFR by fitting to a background-subtracted peak-normalized 2f signal has been developed. The accuracy of the new scheme for determining the WMFR is demonstrated and compared with that of conventional CF-WMS methods by detection of trace acetylene. The results show that the new method provides a four times smaller fitting error than the conventional methods and retrieves the concentration more accurately.
Kaplan, Heidi B.; Dua, Anahita; Litwin, Douglas B.; Ambrose, Catherine G.; Moore, Laura J.; Murray, COL Clinton K.; Wade, Charles E.; Holcomb, John B.
2016-01-01
Background: Sepsis from bacteremia occurs in 250,000 cases annually in the United States, has a mortality rate as high as 60%, and is associated with a poorer prognosis than localized infection. Because of these high figures, empiric antibiotic administration for patients with systemic inflammatory response syndrome (SIRS) and suspected infection is the second most common indication for antibiotic administration in intensive care units (ICUs). However, overuse of empiric antibiotics contributes to the development of opportunistic infections, antibiotic resistance, and the increase in multi-drug-resistant bacterial strains. The current method of diagnosing and ruling out bacteremia is via blood culture (BC) and Gram stain (GS) analysis. Methods: Conventional and molecular methods for diagnosing bacteremia were reviewed and compared. The clinical implications, use, and current clinical trials of polymerase chain reaction (PCR)-based methods to detect bacterial pathogens in the blood stream were detailed. Results: BC/GS has several disadvantages: some bacteria do not grow in culture media; others do not Gram stain appropriately; and cultures can require up to 5 d to guide or discontinue antibiotic treatment. PCR-based methods can potentially be applied to detect microbes in human blood samples rapidly, accurately, and directly. Conclusions: Compared with conventional BC/GS, particular advantages of molecular methods (specifically, PCR-based methods) include faster results, leading to possibly improved antibiotic stewardship when bacteremia is not present. PMID:26918696
A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.
Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena
2012-08-01
A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation, or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
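The kind of linear model described can be illustrated in a few lines; the weekly concentrations and AOT40 values below are fabricated for demonstration only.

```python
# Toy weekly-AOT40 linear model: AOT40_week ~ b0 + b1 * mean passive-sampler O3.
import numpy as np

conc = np.array([55.0, 62.0, 70.0, 48.0, 81.0])        # weekly mean O3, ug/m3 (fake)
aot40 = np.array([310.0, 420.0, 560.0, 240.0, 700.0])  # ppb.h from a monitor (fake)

b1, b0 = np.polyfit(conc, aot40, 1)                    # ordinary least squares, degree 1
pred = b0 + b1 * conc
rmse = np.sqrt(np.mean((pred - aot40) ** 2))
r2 = 1 - np.sum((pred - aot40) ** 2) / np.sum((aot40 - aot40.mean()) ** 2)
print(f"AOT40 ~ {b0:.1f} + {b1:.1f}*C,  R2={r2:.2f},  RMSE={rmse:.0f}")
```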
Validation of heart and lung teleauscultation on an Internet-based system.
Fragasso, Gabriele; De Benedictis, Marialuisa; Palloshi, Altin; Moltrasio, Marco; Cappelletti, Alberto; Carlino, Mauro; Marchisi, Angelo; Pala, Mariagrazia; Alfieri, Ottavio; Margonato, Alberto
2003-11-01
The feasibility and accuracy of an Internet-based system for teleauscultation was evaluated in 103 cardiac patients, who were auscultated by the same cardiologist with a conventional stethoscope and with an Internet-based method, using an electronic stethoscope and transmitting heart and lung sounds between computer work stations. In 92% of patients, the results of electronic and acoustic auscultation coincided, indicating that teleauscultation may be considered a reliable method for assessing cardiac patients and could, therefore, be adopted in the context of comprehensive telecare programs.
NASA Astrophysics Data System (ADS)
Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter
2017-05-01
The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy, and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group, and subgroup), (ii) validate the predicted soil maps against the same validation data set to determine the best method for producing the soil maps, and (iii) select the best soil taxonomic level by different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although the differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index and decreased the 'noisiness' of the soil maps. Multinomial logistic regression performed better at higher taxonomic levels (order and suborder), whereas random forest performed better at lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based). Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
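A minimal sketch of the two digital soil mapping classifiers being compared, driven by terrain covariates, is given below, assuming scikit-learn; the synthetic data, class names, and hyperparameters are our placeholders, and categorical covariates are assumed to be numerically encoded.

```python
# Multinomial logistic regression vs. random forest on (fake) terrain covariates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 100                                  # point observations, as in the largest sample size
X = rng.standard_normal((n, 4))          # stand-ins for elevation, slope, TWI, geomorphic unit
y = rng.choice(["Aridisols", "Entisols", "Inceptisols"], size=n)  # toy soil orders

mlr = LogisticRegression(max_iter=1000)  # multinomial for multiclass targets
rf = RandomForestClassifier(n_estimators=500, random_state=0)

for name, model in [("MLR", mlr), ("RF", rf)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}  (a validation proxy for map purity)")
```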
Validating Alternative Modes of Scoring for Coloured Progressive Matrices.
ERIC Educational Resources Information Center
Razel, Micha; Eylon, Bat-Sheva
Conventional scoring of the Coloured Progressive Matrices (CPM) was compared with three methods of multiple weight scoring. The methods include: (1) theoretical weighting in which the weights were based on a theory of cognitive processing; (2) judged weighting in which the weights were given by a group of nine adult expert judges; and (3)…
Critical review of analytical techniques for safeguarding the thorium-uranium fuel cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakkila, E.A.
1978-10-01
Conventional analytical methods applicable to the determination of thorium, uranium, and plutonium in feed, product, and waste streams from reprocessing thorium-based nuclear reactor fuels are reviewed. Separations methods of interest for these analyses are discussed. Recommendations concerning the applicability of various techniques to reprocessing samples are included. 15 tables, 218 references.
ERIC Educational Resources Information Center
Madhavan, Manoharan; Kaur, Gurjeet
2006-01-01
Introduction: The Fixed Learning Module (FLM), adopted in pathology teaching for medical undergraduates, encompasses the exhibition of potted specimens and charts. Though it is an important teaching method, it also has its limitations. Aim: To create an alternative method for teaching pathology using web-based, interactive computer technology [i.e.,…
Evaluation of a low-cost liquid-based Pap test in rural El Salvador: a split-sample study.
Guo, Jin; Cremer, Miriam; Maza, Mauricio; Alfaro, Karla; Felix, Juan C
2014-04-01
We sought to test the diagnostic efficacy of a low-cost, liquid-based cervical cytology method that could be implemented in low-resource settings. A prospective, split-sample Pap study was performed in 595 women attending a cervical cancer screening clinic in rural El Salvador. Collected cervical samples were used to make a conventional Pap (cell sample directly to glass slide), whereas the residual material was used to make the liquid-based sample using the ClearPrep method. Selected residual samples from the liquid-based collection were tested for the presence of high-risk human papillomaviruses (HPV). Of 595 patients, 570 received the same diagnosis by the 2 methods (95.8% agreement). There were comparable numbers of unsatisfactory cases; however, ClearPrep significantly increased the detection of low-grade squamous intraepithelial lesions and decreased the diagnoses of atypical squamous cells of undetermined significance. ClearPrep identified an equivalent number of high-grade squamous intraepithelial lesion cases as the conventional Pap. High-risk HPV was identified in the residual fluid of the ClearPrep vials in all cases of high-grade squamous intraepithelial lesion, adenocarcinoma in situ, and cancer, as well as in 78% of low-grade squamous intraepithelial lesions. The low-cost ClearPrep Pap test demonstrated detection of squamous intraepithelial lesions equivalent to the conventional Pap smear and demonstrated the potential for ancillary molecular testing. The test seems a viable option for implementation in low-resource settings.
Innovations in diagnostic imaging of localized prostate cancer.
Pummer, Karl; Rieken, Malte; Augustin, Herbert; Gutschi, Thomas; Shariat, Shahrokh F
2014-08-01
In recent years, various imaging modalities have been developed to improve the diagnosis, staging, and localization of early-stage prostate cancer (PCa). A MEDLINE literature search covering 01/2007 to 06/2013 was performed on imaging of localized PCa. Conventional transrectal ultrasound (TRUS) is mainly used to guide prostate biopsy. Contrast-enhanced ultrasound is based on the assumption that PCa tissue is hypervascularized and might be better identified after intravenous injection of a microbubble contrast agent; however, results on its additional value for cancer detection are controversial. Computer-based analysis of the transrectal ultrasound signal (C-TRUS) appears to detect cancer in a high rate of patients with previous biopsies. Real-time elastography seems to have higher sensitivity, specificity, and positive predictive value than conventional TRUS, but the method still awaits prospective validation. The same is true for prostate histoscanning, an ultrasound-based method for tissue characterization. Currently, multiparametric MRI provides improved tissue visualization of the prostate, which may be helpful in the diagnosis and targeting of prostate lesions. However, most published series are small and suffer from variations in indication, methodology, quality, interpretation, and reporting. Among ultrasound-based techniques, real-time elastography and C-TRUS seem the most promising. Multiparametric MRI appears to have advantages over conventional T2-weighted MRI in the detection of PCa. Despite these promising results, no recommendation for the routine use of these novel imaging techniques can currently be made. Prospective studies defining the value of the various imaging modalities are urgently needed.
Kim, Wooseong; Hendricks, Gabriel Lambert; Lee, Kiho; Mylonakis, Eleftherios
2017-06-01
The emergence of antibiotic-resistant and -tolerant bacteria is a major threat to human health. Although drug discovery efforts are ongoing, conventional bacteria-centered screening strategies have thus far failed to yield new classes of effective antibiotics. Therefore, new paradigms for discovering novel antibiotics are of critical importance. Caenorhabditis elegans, a model organism used for in vivo infection studies, offers a promising solution for the identification of anti-infective compounds. Areas covered: This review examines the advantages of C. elegans-based high-throughput screening over conventional, bacteria-centered in vitro screens. It discusses major anti-infective compounds identified from large-scale C. elegans-based screens, presenting first the clinically approved drugs, then known bioactive compounds, and finally novel small molecules. Expert opinion: There are clear advantages to using a C. elegans infection-based screening method. A C. elegans-based screen produces an enriched pool of non-toxic, efficacious, potential anti-infectives, covering conventional antimicrobial agents, immunomodulators, and anti-virulence agents. Although C. elegans-based screens do not reveal the mode of action of hit compounds, this can be elucidated in secondary studies by comparing the results to target-based screens, or by conducting subsequent target-based screens, including the genetic knock-down of host or bacterial genes.
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
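For two classes, a Fisher's ratio objective evaluated directly in feature space takes the generic form below; the notation is ours and may differ in detail from the paper's exact formulation:

\[
J(\mathbf{w}) \;=\; \frac{\bigl(\mu_1(\mathbf{w}) - \mu_2(\mathbf{w})\bigr)^2}{\sigma_1^2(\mathbf{w}) + \sigma_2^2(\mathbf{w})},
\qquad
f(\mathbf{w}; X) \;=\; \log\!\bigl(\mathbf{w}^\top X X^\top \mathbf{w}\bigr),
\]

where f is the log band-power feature of a trial X under spatial filter w, and μ_i, σ_i² are its mean and variance over trials of class i. Because J is a ratio, rescaling w leaves it unchanged, which is consistent with the point above that no regularization parameter needs to be selected.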
Deng, Wei; Zhang, Xiujuan; Pan, Huanhuan; Shang, Qixun; Wang, Jincheng; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2014-01-01
Single-crystal organic nanostructures show promising applications in flexible and stretchable electronics, but their use is impeded by their incompatibility with the well-developed photolithography techniques. Here we report a novel two-step transfer printing (TTP) method for the construction of organic nanowire (NW)-based devices on arbitrary substrates. Copper phthalocyanine (CuPc) NWs are first transfer-printed from the growth substrate to the desired receiver substrate by a contact-printing (CP) method, and then electrode arrays are transfer-printed onto the resulting receiver substrate by an etching-assisted transfer printing (ETP) method. By utilizing a thin copper (Cu) layer as a sacrificial layer, microelectrodes fabricated on it via photolithography can be readily transferred to diverse conventional or non-conventional substrates that were not easily accessible before, with a high transfer yield of nearly 100%. The ETP method also exhibits extremely high flexibility; various electrodes such as Au, Ti, and Al can be transferred, and almost all types of organic devices, such as resistors, Schottky diodes, and field-effect transistors (FETs), can be constructed on planar or complex curvilinear substrates. Significantly, these devices function properly and exhibit close or even superior performance compared with device counterparts fabricated by the conventional approach. PMID:24942458
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kang, S; Eom, J
Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability results in improved accuracy of material decomposition for dual-energy X-ray imaging and reduced radiation dose. In this study, PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with conventional CEDM in terms of radiation dose, image quality, and accuracy of material decomposition. Methods: A dual-energy model was designed by using the Beer-Lambert law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, composed of 50% glandular and 50% adipose tissues, were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for conventional CEDM. Energy bins of 20–33 and 34–50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise properties and normalized root-mean-square error (NRMSE). Results: Compared with conventional CEDM, the iodine solutions were clearly decomposed by the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for conventional CEDM, the PCD-based CEDM improved the noise properties and the accuracy of the decomposition images. Conclusion: This study demonstrates that PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with conventional CEDM. Therefore, PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.
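For intuition about the dual-energy model, the following Python sketch performs a two-material decomposition under an effectively monoenergetic Beer-Lambert approximation with invented attenuation coefficients; the rational inverse fitting of a polychromatic spectrum described above is more elaborate than this linear solve.

    import numpy as np

    # Hypothetical effective attenuation coefficients (1/cm) per energy bin;
    # columns are the basis materials (iodine, breast tissue).
    MU = np.array([[25.0, 0.50],    # low-energy bin
                   [ 8.0, 0.35]])   # high-energy bin

    def decompose(i_low, i_high, i0_low, i0_high):
        # Beer-Lambert per bin: -ln(I/I0) = MU @ t, solved pixel-wise for
        # the basis-material thicknesses t = (t_iodine, t_tissue).
        logs = np.stack([-np.log(i_low / i0_low), -np.log(i_high / i0_high)])
        t = np.linalg.solve(MU, logs.reshape(2, -1))
        return t.reshape((2,) + logs.shape[1:])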
Zhao, Chenguang; Bolan, Patrick J.; Royce, Melanie; Lakkadi, Navneeth; Eberhardt, Steven; Sillerud, Laurel; Lee, Sang-Joon; Posse, Stefan
2012-01-01
Purpose To quantitatively measure tCho levels in healthy breasts using Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI). Material and Methods The 2-dimensional mapping of tCho at 3 Tesla across an entire breast slice using PEPSI and a hybrid spectral quantification method based on LCModel fitting and integration of tCho using the fitted spectrum were developed. This method was validated in 19 healthy females and compared with single voxel spectroscopy (SVS) and with PRESS prelocalized conventional Magnetic Resonance Spectroscopic Imaging (MRSI) using identical voxel size (8 cc) and similar scan times (~7 min). Results A tCho peak with a signal-to-noise ratio larger than 2 was detected in 10 subjects using both PEPSI and SVS. The average tCho concentration in these subjects was 0.45 ± 0.2 mmol/kg using PEPSI and 0.48 ± 0.3 mmol/kg using SVS. Comparable results were obtained in 2 subjects using conventional MRSI. High lipid content in the spectra of 9 tCho-negative subjects was associated with spectral line broadening of more than 26 Hz, which made tCho detection impossible. Conventional MRSI with PRESS prelocalization in glandular tissue in two of these subjects yielded tCho concentrations comparable to PEPSI. Conclusion The detection sensitivity of PEPSI is comparable to SVS and conventional PRESS-MRSI. PEPSI can potentially be used in the evaluation of tCho in breast cancer. A tCho threshold concentration of ~0.7 mmol/kg might be used to differentiate between cancerous and healthy (or benign) breast tissues based on this work and previous studies. PMID:22782667
Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S
2015-08-01
The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Boeddinghaus, Moritz; Breloer, Eva Sabina; Rehmann, Peter; Wöstmann, Bernd
2015-11-01
The purpose of this clinical study was to compare the marginal fit of dental crowns based on three different intraoral digital impression methods and one conventional impression method. Forty-nine teeth in a total of 24 patients were prepared to be treated with full-coverage restorations. Digital impressions were made using three intraoral scanners: Sirona CEREC AC Omnicam (OCam), Heraeus Cara TRIOS and 3M Lava True Definition (TDef). Furthermore, a gypsum model based on a conventional impression (EXA'lence, GC, Tokyo, Japan) was scanned with a standard laboratory scanner (3Shape D700). Based on the datasets obtained, four zirconia copings per tooth were produced. The marginal fit of the copings in the patient's mouth was assessed employing a replica technique. Overall, seven measurement copings did not fit and, therefore, could not be assessed. The marginal gap [median/interquartile range] was 88 μm (68-136 μm) for the TDef, 112 μm (94-149 μm) for the Cara TRIOS, 113 μm (81-157 μm) for the laboratory scanner and 149 μm (114-218 μm) for the OCam. There was a statistically significant difference between the OCam and the other groups (p < 0.05). Within the limitations of this study, it can be concluded that zirconia copings based on intraoral scans and laboratory scans of a conventional model are comparable to one another with regard to their marginal fit. Regarding the results of this study, the digital intraoral impression can be considered an alternative to a conventional impression with a consecutive digital workflow when the finish line is clearly visible and can be kept dry.
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken with the objective of checking the effectiveness of spectral similarity measures in developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, classification of pixels is obtained by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Though these algorithms are readily used, little emphasis has been placed on the use of various spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here a successful attempt has been made to develop precise crop spectra based on spectral similarity. As the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data. Hence, notwithstanding the conventional method, data precision has been enforced effectively to serve the purpose of the present research work. The effectiveness of the developed precise field spectra was evaluated by spectral discrimination measures, which yielded higher discrimination values than spectra developed conventionally. Overall classification accuracy is 51.89% for the image classified by field spectra selected conventionally and 75.47% for the image classified by field spectra selected precisely based on spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for the images classified using conventional and precise field spectra, respectively. The considerably higher classification accuracy, KHAT, and Z values show the possibility of a new approach to field spectra selection based on spectral similarity measures.
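As one concrete reading of "precise spectra selection", the Python sketch below screens field spectra with the Spectral Angle Mapper before forming the reference crop spectrum; SAM is only one of several possible similarity measures, and the 0.05 rad tolerance is a hypothetical choice.

    import numpy as np

    def spectral_angle(a, b):
        # Spectral Angle Mapper: angle (radians) between two spectra.
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    def select_precise(spectra, max_angle=0.05):
        # Reject outlier field spectra dissimilar to the ensemble mean,
        # then rebuild the crop spectrum from the retained subset.
        ref = spectra.mean(axis=0)
        keep = np.array([spectral_angle(s, ref) <= max_angle for s in spectra])
        return spectra[keep].mean(axis=0)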
Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi
2017-07-21
In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations, applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, 100 times longer than that of conventional QM-based samplings. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
Garoushi, Sufyan K.; Hatem, Marwa; Lassila, Lippo V. J.; Vallittu, Pekka K.
2015-01-01
Abstract Objectives: To determine the marginal microleakage of Class II restorations made with different composite base materials and the static load-bearing capacity of direct composite onlay restorations. Methods: Class II cavities were prepared in 40 extracted molars. They were divided into five groups (n = 8/group) depending on the composite base material used (everX Posterior, SDR, Tetric EvoFlow). After the Class II restorations were completed, specimens were sectioned mid-sagittally. For each group, sectioned restorations were immersed in dye. Specimens were viewed under a stereo-microscope and the percentage of cavity leakage was calculated. Ten groups of onlay restorations were fabricated (n = 8/group); groups were made with composite base materials (everX Posterior, SDR, Tetric EvoFlow, Gradia Direct LoFlo) and covered by a 1 mm layer of conventional (Tetric N-Ceram) or bulk fill (Tetric EvoCeram Bulk Fill) composites. Groups made only from conventional, bulk fill and short fiber composites were used as controls. Specimens were statically loaded until fracture. Data were analyzed using ANOVA (p = 0.05). Results: Microleakage of restorations made of plain conventional composite or short fiber composite base material showed statistically (p < 0.05) lower values compared to other groups. ANOVA revealed that onlay restorations made from short fiber-reinforced composite (FRC) as base or as plain restoration had statistically significantly higher load-bearing capacity (1593 N) (p < 0.05) than other restorations. Conclusion: Restorations combining a base of short FRC and a surface layer of conventional composite displayed promising performance with regard to microleakage and load-bearing capacity. PMID:28642894
NASA Astrophysics Data System (ADS)
Kim, Jongbin; Kim, Minkoo; Kim, Jong-Man; Kim, Seung-Ryeol; Lee, Seung-Woo
2014-09-01
This paper reports the transient response characteristics of active-matrix organic light-emitting diode (AMOLED) displays for mobile applications. We show that the rising responses resemble a saw-tooth waveform and are not always faster than those of liquid crystal displays. Thus, a driving technology is proposed to improve the rising transient responses of AMOLED based on the overdrive (OD) technology. We modified the OD technology by combining it with a dithering method because the conventional OD method cannot successfully enhance all the rising responses. Our method can improve all the transitions of AMOLED without modifying the conventional gamma architecture of drivers. A new artifact is found when OD is applied to certain transitions. We propose an optimum OD selection method to mitigate the artifact. The implementation results show that the proposed technology can successfully improve the motion quality of scrolling text as well as moving pictures in AMOLED displays.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with a conventional CPU, even for a native GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
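To show the kind of stencil that dominates FDTD run time, this Python fragment updates one field component on a simplified (non-staggered) grid; in a CUDA port, each node of such an update maps naturally to one thread. The grid layout is illustrative, not the paper's implementation.

    import numpy as np

    def update_ex(ex, hy, hz, dt, dy, dz, eps):
        # dEx/dt = (1/eps) * (dHz/dy - dHy/dz), backward differences on
        # interior nodes; the other five field updates are analogous.
        ex[:, 1:-1, 1:-1] += (dt / eps) * (
            (hz[:, 1:-1, 1:-1] - hz[:, :-2, 1:-1]) / dy
            - (hy[:, 1:-1, 1:-1] - hy[:, 1:-1, :-2]) / dz)
        return ex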
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK. The numerical results verify the new methods and are superior to those generated by conventional methods in seismic wave modeling.
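For orientation, the simplest member of the symplectic PRK family (the Störmer-Verlet scheme, written here in LaTeX for a semi-discrete system with mass matrix M and stiffness matrix K from the spatial discretization) shows the staged update structure into which an additional spatial-discretization term can be inserted; the paper's two modified schemes are more elaborate than this.

    % Hamiltonian of the semi-discrete elastic wave equation:
    % H(p, q) = \tfrac{1}{2} p^\top M^{-1} p + \tfrac{1}{2} q^\top K q
    \begin{aligned}
    p_{n+1/2} &= p_n - \tfrac{\Delta t}{2}\, K q_n,\\
    q_{n+1}   &= q_n + \Delta t\, M^{-1} p_{n+1/2},\\
    p_{n+1}   &= p_{n+1/2} - \tfrac{\Delta t}{2}\, K q_{n+1}.
    \end{aligned}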
Aouadi, Souha; Vasic, Ana; Paloor, Satheesh; Torfeh, Tarraf; McGarry, Maeve; Petric, Primoz; Riyas, Mohamed; Hammoud, Rabih; Al-Hammadi, Noora
2017-10-01
To create a synthetic CT (sCT) from conventional brain MRI using a patch-based method for MRI-only radiotherapy planning and verification. Conventional T1- and T2-weighted MRI and CT datasets from 13 patients who underwent brain radiotherapy were included in a retrospective study, whereas 6 patients were tested prospectively. A new contribution to the Non-local Means Patch-Based Method (NMPBM) framework was made with the use of novel multi-scale and dual-contrast patches. Furthermore, the training dataset was improved by pre-selecting the database patients closest to the target patient for a computation time/accuracy balance. sCT and derived DRRs were assessed visually and quantitatively. VMAT planning was performed on CT and sCT for hypothetical PTVs in homogeneous and heterogeneous regions. Dosimetric analysis was done by comparing Dose Volume Histogram (DVH) parameters of PTVs and organs at risk (OARs). The positional accuracy of MRI-only image-guided radiation therapy based on CBCT or kV images was evaluated. The retrospective (respectively prospective) evaluation of the proposed Multi-scale and Dual-contrast Patch-Based Method (MDPBM) gave a mean absolute error MAE = 99.69 ± 11.07 HU (98.95 ± 8.35 HU), and a Dice index in bone DIbone = 0.83 ± 0.03 (0.82 ± 0.03). Good agreement with conventional planning techniques was obtained; the highest percentage of DVH metric deviations was 0.43% (0.53%) for PTVs and 0.59% (0.75%) for OARs. The accuracy of sCT/CBCT or DRRsCT/kV image registration parameters was <2 mm and <2°. Improvements with MDPBM, compared to NMPBM, were significant. We presented a novel method for sCT generation from T1- and T2-weighted MRI, potentially suitable for MRI-only external beam radiotherapy in brain sites. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ho, K. K.; Moody, G. B.; Peng, C. K.; Mietus, J. E.; Larson, M. G.; Levy, D.; Goldberger, A. L.
1997-01-01
BACKGROUND: Despite much recent interest in quantification of heart rate variability (HRV), the prognostic value of conventional measures of HRV and of newer indices based on nonlinear dynamics is not universally accepted. METHODS AND RESULTS: We have designed algorithms for analyzing ambulatory ECG recordings and measuring HRV without human intervention, using robust methods for obtaining time-domain measures (mean and SD of heart rate), frequency-domain measures (power in the bands of 0.001 to 0.01 Hz [VLF], 0.01 to 0.15 Hz [LF], and 0.15 to 0.5 Hz [HF] and total spectral power [TP] over all three of these bands), and measures based on nonlinear dynamics (approximate entropy [ApEn], a measure of complexity, and detrended fluctuation analysis [DFA], a measure of long-term correlations). The study population consisted of chronic congestive heart failure (CHF) case patients and sex- and age-matched control subjects in the Framingham Heart Study. After exclusion of technically inadequate studies and those with atrial fibrillation, we used these algorithms to study HRV in 2-hour ambulatory ECG recordings of 69 participants (mean age, 71.7±8.1 years). By use of separate Cox proportional-hazards models, the conventional measures SD (P<.01), LF (P<.01), VLF (P<.05), and TP (P<.01) and the nonlinear measure DFA (P<.05) were predictors of survival over a mean follow-up period of 1.9 years; other measures, including ApEn (P>.3), were not. In multivariable models, DFA was of borderline predictive significance (P=.06) after adjustment for the diagnosis of CHF and SD. CONCLUSIONS: These results demonstrate that HRV analysis of ambulatory ECG recordings based on fully automated methods can have prognostic value in a population-based study and that nonlinear HRV indices may contribute prognostic value to complement traditional HRV measures.
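Of the indices above, DFA is compact enough to sketch. The Python below estimates the scaling exponent of an RR-interval series with first-order detrending; the window sizes and single-exponent fit are simplifying assumptions and may differ from the study's implementation.

    import numpy as np

    def dfa_alpha(rr, scales=(4, 8, 16, 32, 64)):
        # Integrate the mean-subtracted series, detrend it piecewise
        # linearly at each scale, then fit log F(n) against log n.
        y = np.cumsum(rr - np.mean(rr))
        fluct = []
        for n in scales:
            f2 = []
            for i in range(len(y) // n):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            fluct.append(np.sqrt(np.mean(f2)))
        return np.polyfit(np.log(scales), np.log(fluct), 1)[0]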
Fuzzy rule-based image segmentation in dynamic MR images of the liver
NASA Astrophysics Data System (ADS)
Kobashi, Syoji; Hata, Yutaka; Tokimoto, Yasuhiro; Ishikawa, Makato
2000-06-01
This paper presents a fuzzy rule-based region growing method for segmenting two-dimensional (2-D) and three-dimensional (3-D) magnetic resonance (MR) images. The method is an extension of the conventional region growing method. The proposed method evaluates the growing criteria by using fuzzy inference techniques. The use of fuzzy if-then rules is appropriate for describing the knowledge of the lesions on the MR images. To evaluate the performance of the proposed method, it was applied to artificially generated images. In comparison with the conventional method, the proposed method shows high robustness for noisy images. The method was then applied to segment dynamic MR images of the liver. Dynamic MR imaging has been used for the diagnosis of hepatocellular carcinoma (HCC), portal hypertension, and so on. Segmenting the liver, portal vein (PV), and inferior vena cava (IVC) can provide a useful description for diagnosis, and is a basis for a pre-surgery planning system and a virtual endoscope. To apply the proposed method, fuzzy if-then rules are derived from the time-density curves of ROIs. In the experimental results, 2-D reconstructed and 3-D rendered images of the segmented liver, PV, and IVC are shown. Evaluation by a physician shows that the generated images are consistent with the hepatic anatomy and would be useful for understanding, diagnosis, and pre-surgery planning.
Yang, Guang; Yu, Simiao; Dong, Hao; Slabaugh, Greg; Dragotti, Pier Luigi; Ye, Xujiong; Liu, Fangde; Arridge, Simon; Keegan, Jennifer; Guo, Yike; Firmin, David
2018-06-01
Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
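The flavor of the combined objective can be sketched as follows in Python/NumPy; the weights, the non-saturating adversarial term, and the omission of the perceptual component are assumptions made for brevity rather than the actual DAGAN training loss.

    import numpy as np

    def dagan_style_loss(x_rec, x_true, d_score,
                         w_img=15.0, w_freq=0.1, w_adv=1.0):
        # Image-domain MSE keeps pixel fidelity, frequency-domain MSE
        # enforces similarity of k-space content, and the adversarial term
        # (d_score = discriminator output for the reconstruction) pushes
        # the generator toward realistic texture.
        l_img = np.mean((x_rec - x_true) ** 2)
        l_freq = np.mean(np.abs(np.fft.fft2(x_rec) - np.fft.fft2(x_true)) ** 2)
        l_adv = -np.log(d_score + 1e-12)
        return w_img * l_img + w_freq * l_freq + w_adv * l_adv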
Rasmussen Hellberg, Rosalee S; Morrissey, Michael T; Hanner, Robert H
2010-09-01
The purpose of this study was to develop a species-specific multiplex polymerase chain reaction (PCR) method that allows for the detection of salmon species substitution on the commercial market. Species-specific primers and TaqMan® probes were developed based on a comprehensive collection of mitochondrial 5' cytochrome c oxidase subunit I (COI) deoxyribonucleic acid (DNA) "barcode" sequences. Primers and probes were combined into multiplex assays and tested for specificity against 112 reference samples representing 25 species. Sensitivity and linearity tests were conducted using 10-fold serial dilutions of target DNA (single-species samples) and DNA admixtures containing the target species at levels of 10%, 1.0%, and 0.1% mixed with a secondary species. The specificity tests showed positive signals for the target DNA in both real-time and conventional PCR systems. Nonspecific amplification in both systems was minimal; however, false positives were detected at low levels (1.2% to 8.3%) in conventional PCR. Detection levels were similar for admixtures and single-species samples based on a 30 PCR cycle cut-off, with limits of 0.25 to 2.5 ng (1% to 10%) in conventional PCR and 0.05 to 5.0 ng (0.1% to 10%) in real-time PCR. A small-scale test with food samples showed promising results, with species identification possible even in heavily processed food items. Overall, this study presents a rapid, specific, and sensitive method for salmon species identification that can be applied to mixed-species and heavily processed samples in either conventional or real-time PCR formats. This study provides a newly developed method for salmon and trout species identification that will assist both industry and regulatory agencies in the detection and prevention of species substitution. This multiplex PCR method allows for rapid, high-throughput species identification even in heavily processed and mixed-species samples. An inter-laboratory study is currently being carried out to assess the ability of this method to identify species in a variety of commercial salmon and trout products.
Molla Kazemiha, Vahid; Bonakdar, Shahin; Amanzadeh, Amir; Azari, Shahram; Memarnejadian, Arash; Shahbazi, Shirin; Shokrgozar, Mohammad Ali; Mahdian, Reza
2016-08-01
Mycoplasmas are the most important contaminants of cell cultures throughout the world. They are considered a major problem in biological studies and in biopharmaceutical economic issues. In this study, our aim was to find the best standard technique, as a rapid method with high sensitivity, specificity and accuracy, for the detection of mycoplasma contamination in the cell lines of the National Cell Bank of Iran. Thirty cell lines suspected of mycoplasma contamination were evaluated by five different techniques: microbial culture, indirect DNA DAPI staining, the enzymatic MycoAlert® assay, conventional PCR and real-time PCR. Five mycoplasma-contaminated cell lines were assigned as positive controls and five mycoplasma-free cell lines as negative controls. The enzymatic method was performed using the MycoAlert® mycoplasma detection kit. The real-time PCR technique was conducted with PromoKine diagnostic kits. In the conventional PCR method, mycoplasma genus-specific primers were designed to analyze sequences based on a fixed and common region of the 16S ribosomal RNA, with a PCR product size of 425 bp. Mycoplasma contamination was observed in 60, 56.66, 53.33, 46.66 and 33.33% of the 30 different cell cultures by the real-time PCR, PCR, enzymatic MycoAlert®, indirect DNA DAPI staining and microbial culture methods, respectively. Analysis of the results of the different methods showed that the real-time PCR assay was superior to the other methods, with sensitivity, specificity, accuracy, and predictive values of positive and negative results of 100%. These values were 94.44, 100, 96.77, 100 and 92.85% for the conventional PCR method, respectively. Therefore, this study showed that real-time PCR and PCR assays based on common sequences in the 16S ribosomal RNA are reliable methods with high sensitivity, specificity and accuracy for the detection of mycoplasma contamination in cell cultures and other biological products.
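The five figures of merit quoted above follow directly from a 2x2 confusion table; a minimal Python helper makes the definitions explicit (the counts are supplied by the reader, nothing here is specific to the study's data).

    def diagnostic_metrics(tp, fp, tn, fn):
        # Sensitivity, specificity, accuracy and the predictive values of
        # positive and negative results from true/false counts.
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }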
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, W; Zaghian, M; Lim, G
2015-06-15
Purpose: The current practice of accounting for the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is in fact a variable depending on the dose per fraction, the linear energy transfer, tissue parameters, etc. In this study, we investigate the impact of using variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. In order to calculate variable RBE, tissue-specific parameters were obtained from the literature and dose-averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared by dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets than the fixed RBE weighted dose distributions for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plans were optimized by fRBE-OPT. For organ-at-risk sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization, rather than conventional physical dose based optimization, in IMPT planning may improve tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients. The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.
Improving label-free detection of circulating melanoma cells by photoacoustic flow cytometry
NASA Astrophysics Data System (ADS)
Zhou, Huan; Wang, Qiyan; Pang, Kai; Zhou, Quanyu; Yang, Ping; He, Hao; Wei, Xunbin
2018-02-01
Melanoma is a malignant tumor of melanocytes characterized by high mortality and a high metastasis rate. Circulating melanoma cells, with their high melanin content, can be detected by light absorption to diagnose and treat cancer at an early stage. Compared with conventional detection methods such as fluorescence-based in vivo flow cytometry (IVFC), in vivo photoacoustic flow cytometry (PAFC) uses melanin in the cells as a biomarker to collect photoacoustic (PA) signals in a non-invasive way, without labeling with toxic fluorescent dyes. The information on target tumor cells is helpful for data analysis and cell counting. However, the raw signals in a PAFC system contain numerous noises, such as environmental noise, device noise and in vivo motion noise. Conventional denoising algorithms such as the wavelet denoising (WD) method and the mean filter (MF) method rely on local information to extract the data of clinical interest, which can remove subtle features while leaving much of the noise. To address these issues, the non-local means (NLM) method, based on non-local data, has been proposed to suppress the noise in PA signals. Extensive experiments on in vivo PA signals from mice injected with B16F10 cells in the caudal vein have been conducted. All the results indicate that the NLM method has superior noise reduction performance and better preserves subtle information.
Effectiveness of a modified tutorless problem-based learning method in dermatology - a pilot study.
Kaliyadan, F; Amri, M; Dhufiri, M; Amin, T T; Khan, M A
2012-01-01
Problem-Based Learning (PBL) is a student-centred instructional strategy in which students learn in a collaborative manner, the learning process being guided by a facilitator. One of the limitations of conventional PBL in medical education is the need for adequate resources in terms of faculty and time. Our study aimed to compare conventional PBL in dermatology with a modified tutorless PBL in which pre-listed cues and the use of digital media help direct the learning process. Thirty-one fifth-year medical students were divided into two groups: the study group, comprising 16 students, was exposed to the modified PBL, whereas the control group, comprising 15 students, was given the same scenarios and triggers, but in a conventional tutor-facilitated PBL. Knowledge acquisition and student feedback were assessed using a post-test and a Likert scale-based questionnaire, respectively. The post-test marks showed no statistically significant differences between the two groups. The general feedback regarding the modified PBL was positive and the students felt comfortable with the module. The learning objectives were met satisfactorily in both groups. Modified tutorless PBL modules might be an effective method to incorporate student-centred learning in dermatology without constraints in terms of faculty resources or time. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.
Proposal for a new categorization of aseptic processing facilities based on risk assessment scores.
Katayama, Hirohito; Toda, Atsushi; Tokunaga, Yuji; Katoh, Shigeo
2008-01-01
Risk assessment of aseptic processing facilities was performed using two published risk assessment tools. Calculated risk scores were compared with experimental test results, including environmental monitoring and media fill run results, in three different types of facilities. The two risk assessment tools used gave generally similar outcomes. However, depending on the tool used, variations were observed in the relative scores between the facilities. For the facility yielding the lowest risk scores, the corresponding experimental test results showed no contamination, indicating that these ordinary testing methods are insufficient to evaluate this kind of facility. A conventional facility having acceptable aseptic processing lines gave relatively high risk scores. The facility showing a rather high risk score demonstrated the usefulness of conventional microbiological test methods. Considering the significant gaps observed in calculated risk scores and in the ordinary microbiological test results between advanced and conventional facilities, we propose a facility categorization based on risk assessment. The most important risk factor in aseptic processing is human intervention. When human intervention is eliminated from the process by advanced hardware design, the aseptic processing facility can be classified into a new risk category that is better suited to assuring sterility based on a new set of criteria rather than on currently used microbiological analysis. To fully benefit from advanced technologies, we propose three risk categories for these aseptic facilities.
Ntonifor, N N; Ngufor, C A; Kimbi, H K; Oben, B O
2006-10-01
To document and test the efficacy of indigenous traditional personal protection methods against mosquito bites and general nuisance. A prospective study based on a survey and field evaluation of selected plant-based personal protection methods against mosquito bites. Bolifamba, a rural setting of the Mount Cameroon region. A structured questionnaire was administered to 179 respondents and two anti-mosquito measures were tested under field conditions. Identified traditional anti-mosquito methods used by indigenes of Bolifamba. Two plants tested under field conditions were found to be effective. Of the 179 respondents, 88 (49.16%) used traditional anti-mosquito methods; 57 (64.77%) of these used plant-based methods while 31 (35.2%) used various petroleum oils. The remaining respondents, 91 (50.8%), used conventional personal protection methods. The reasons given for using traditional methods were their availability, their affordability and the lack of known, more effective alternatives. The demerits of these methods were that they are laborious to implement, stain clothing, and produce a lot of smoke and repulsive odours when used; those of conventional methods were a lack of adequate information about them, high cost and non-availability. When the two most frequently used plants, Saccharum officinarum and Ocimum basilicum, were evaluated under field conditions, each gave better protection than the control. Most plants used against mosquitoes in the area are known potent mosquito repellents, but others identified in the study warrant further research. The two tested under field conditions were effective, though less so than the commonly used commercial diethyltoluamide.
RECOVERY OF VOCS FROM SURFACTANT SOLUTION BY PERVAPORATION
Surfactant-based processes are emerging as promising technologies to enhance conventional pump-and-treat methods for remediating soils contaminated with nonaqueous phase liquids (NAPLs), primarily due to the potential to significantly reduce the remediation time. In order to reus...
DOT National Transportation Integrated Search
2016-11-28
Intelligent Compaction (IC) is considered to be an innovative technology intended to address some of the problems associated with conventional compaction methods for earthwork (e.g. stiffness-based measurements instead of density-based measurements). I...
Gender-Specific Correlates of Complementary and Alternative Medicine Use for Knee Osteoarthritis
Yang, Shibing; Eaton, Charles B.; McAlindon, Timothy; Lapane, Kate L.
2012-01-01
Abstract Background Knee osteoarthritis (OA) increases healthcare use and cost. Women have higher pain and lower quality of life measures compared to men even after accounting for differences in age, body mass index (BMI), and radiographic OA severity. Our objective was to describe gender-specific correlates of complementary and alternative medicine (CAM) use among persons with radiographically confirmed knee OA. Methods Using data from the Osteoarthritis Initiative, 2,679 women and men with radiographic tibiofemoral OA in at least one knee were identified. Treatment approaches were classified as current CAM therapy (alternative medical systems, mind-body interventions, manipulation and body-based methods, energy therapies, and three types of biologically based therapies) or conventional medication use (over-the-counter or prescription). Gender-specific multivariable logistic regression models identified sociodemographic and clinical/functional correlates of CAM use. Results CAM use, either alone (23.9% women, 21.9% men) or with conventional medications (27.3% women, 19.0% men), was common. Glucosamine use (27.2% women, 28.2% men) and chondroitin sulfate use (24.8% women; 25.7% men) did not differ by gender. Compared to men, women were more likely to report use of mind-body interventions (14.1% vs. 5.7%), topical agents (16.1% vs. 9.5%), and concurrent CAM strategies (18.0% vs. 9.9%). Higher quality of life measures and physical function indices in women were inversely associated with any therapy, and higher pain scores were positively associated with conventional medication use. History of hip replacement was a strong correlate of conventional medication use in women but not in men. Conclusions Women were more likely than men to use CAM alone or concomitantly with conventional medications. PMID:22946630
Luong, Emilie; Shayegan, Amir
2018-01-01
Aim The aim of this study was to compare the microleakage of conventionally restored class V cavities using acid etchant with that of cavities conditioned by erbium-doped yttrium aluminum garnet (Er:YAG) laser, and also to assess and compare the effectiveness of enamel surface treatments of occlusal pits and fissures by acid etching and by Er:YAG laser conditioning. Materials and methods Seventy-two extracted third molars were used in this study. The samples were divided into two major groups: class V cavities, and pit and fissure sealants. Each group was divided into conventional acid etching, Er:YAG laser conditioning, and conventional acid etching combined with Er:YAG laser conditioning (n=12). The teeth were placed in 2% methylene blue dye solution, sectioned, and evaluated according to the dye penetration criteria. Two samples per subgroup were chosen for scanning electron microscopic (SEM) analysis. Results There was a significant difference between the occlusal and cervical margin groups. The laser conventional composite cementum group showed higher microleakage values than the other groups. There was no significant difference among the occlusal margin groups. However, there was a significant difference among the cervical margin groups in terms of microleakage. In the sealant groups, there was a significant difference between the laser and the conventional with/without laser treatment groups in terms of microleakage. Conclusion Based on the results reported in this study, it can be concluded that application of the Er:YAG laser beneath the resin composite, the resin-modified glass ionomers (GIs), and the fissure sealant placement may be an alternative enamel and dentin etching method to acid etching. PMID:29881311
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days or months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach succeeds in improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
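A compact rendering of the two-stage idea, per-trace STA/LTA energy ratios followed by cross-correlation against their stack, might look like this in Python; the window lengths and the correlation-against-stack formulation are illustrative assumptions.

    import numpy as np

    def sta_lta(trace, nsta, nlta):
        # Running short-term over long-term average of the trace energy.
        e = trace ** 2
        sta = np.convolve(e, np.ones(nsta) / nsta, mode="same")
        lta = np.convolve(e, np.ones(nlta) / nlta, mode="same")
        return sta / (lta + 1e-12)

    def stack_correlation(traces, nsta, nlta):
        # Correlate each trace's energy ratio with the stacked ratio: high
        # values on all traces suggest a regional event, high values on
        # only a few traces suggest a local one.
        ratios = np.array([sta_lta(tr, nsta, nlta) for tr in traces])
        stack = ratios.mean(axis=0)
        return np.array([np.corrcoef(r, stack)[0, 1] for r in ratios])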
Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie
2017-11-01
While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
Application of nanomaterials in the bioanalytical detection of disease-related genes.
Zhu, Xiaoqian; Li, Jiao; He, Hanping; Huang, Min; Zhang, Xiuhua; Wang, Shengfu
2015-12-15
In the diagnosis of genetic diseases and disorders, nanomaterials-based gene detection systems have significant advantages over conventional diagnostic systems in terms of simplicity, sensitivity, specificity, and portability. In this review, we describe the application of nanomaterials for the detection of disease-related genes by methods other than PCR-based ones, such as colorimetry, fluorescence-based methods, electrochemistry, microarray methods, surface-enhanced Raman spectroscopy (SERS), quartz crystal microbalance (QCM) methods, and dynamic light scattering (DLS). The most commonly used nanomaterials are gold, silver, carbon and semiconducting nanoparticles. Various nanomaterials-based gene detection methods are introduced, their respective advantages are discussed, and selected examples are provided to illustrate the properties of these nanomaterials and their emerging applications for the detection of specific nucleic acid sequences. Copyright © 2015. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benmakhlouf, H; Kraepelien, T; Forander, P
2014-06-01
Purpose: Most Gamma Knife treatments are based solely on MR images. However, for fractionated treatments, and to implement TPS dose calculations that require electron densities, CT image data are essential. The purpose of this work is to assess the dosimetric effects of using MR images registered with stereotactic CT images in Gamma Knife treatments. Methods: Twelve patients treated for vestibular schwannoma with Gamma Knife Perfexion (Elekta Instruments, Sweden) were selected for this study. The prescribed doses (12 Gy to the periphery) were delivered based on the conventional approach of using stereotactic MR images only. These plans were imported into stereotactic CT images (by registering MR images with stereotactic CT images using the Leksell GammaPlan registration software). The dose plans for each patient are identical in both cases except for potential rotations and translations resulting from the registration. The impact of the registrations was assessed by an algorithm written in Matlab. The algorithm compares the dose distributions voxel-by-voxel between the two plans, calculates the full dose coverage of the target (treated in the conventional approach) achieved by the CT-based plan, and calculates the minimum dose delivered to the target (treated in the conventional approach) achieved by the CT-based plan. Results: The mean dose difference between the plans was 0.2 Gy to 0.4 Gy (max 4.5 Gy), whereas between 89% and 97% of the target (treated in the conventional approach) received the prescribed dose from the CT-based plan. The minimum dose to the target (treated in the conventional approach) given by the CT-based plan was between 7.9 Gy and 10.7 Gy (compared to 12 Gy in the conventional treatment). Conclusion: The impact of using MR images registered with stereotactic CT images has successfully been compared to conventionally delivered dose plans, showing significant differences between the two. Although CT images have been implemented clinically, the effect of the registration has not been fully investigated.
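The comparison step lends itself to a short sketch, here in Python rather than the Matlab named above; the array names are invented, and only the 12 Gy prescription is taken from the abstract.

    import numpy as np

    def compare_plans(dose_mr, dose_ct, target_mask, prescription=12.0):
        # Voxel-by-voxel difference between the MR-only plan and the plan
        # recomputed on registered CT, plus target coverage at prescription
        # and the minimum target dose under the CT-based plan.
        mean_diff = np.abs(dose_mr - dose_ct).mean()
        target_dose = dose_ct[target_mask]
        coverage = np.mean(target_dose >= prescription)
        return mean_diff, coverage, target_dose.min()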
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
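Because the pixel attenuations are approximately Gaussian, the negative-pixel percentage follows in closed form from the nodule's mean and standard deviation. A minimal sketch with SciPy follows; the 10% decision cutoff is a hypothetical placeholder, not the study's operating point.

    from scipy.stats import norm

    def negative_pixel_fraction(mean_hu, sd_hu):
        # P(pixel < 0 HU) under a Gaussian fit to the nodule attenuations.
        return norm.cdf(-mean_hu / sd_hu)

    def looks_like_adenoma(mean_hu, sd_hu, cutoff=0.10):
        # Lipid-rich adenomas show a substantial negative-pixel fraction.
        return negative_pixel_fraction(mean_hu, sd_hu) > cutoff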
The development of additive manufacturing technique for nickel-base alloys: A review
NASA Astrophysics Data System (ADS)
Zadi-Maad, Ahmad; Basuki, Arif
2018-04-01
Nickel-base alloys are attractive owing to their excellent mechanical properties and high resistance to creep deformation, corrosion, and oxidation. However, their performance is difficult to control during casting or forging. In recent years, the additive manufacturing (AM) process has been implemented to replace the conventional directional solidification process for the production of nickel-base alloys. Owing to its potentially lower cost and more flexible manufacturing process, AM is considered a substitute for existing techniques. This paper provides a comprehensive review of previous work related to AM techniques for Ni-base alloys, highlighting current challenges and methods of solving them. The properties of conventionally manufactured Ni-base alloys are also compared with those of AM-fabricated alloys. Mechanical properties obtained from tension, hardness and fatigue tests are included, along with discussions of the effect of post-treatment processes. Recommendations for further work are also provided.
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Shermeyer, Jacob S.; Haack, Barry N.
2015-01-01
Two forestry-change detection methods are described, compared, and contrasted for estimating deforestation and growth in threatened forests in southern Peru from 2000 to 2010. The methods used in this study rely on freely available data, including atmospherically corrected Landsat 5 Thematic Mapper and Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation continuous fields (VCF). The two methods include a conventional supervised signature extraction method and a unique self-calibrating method called MODIS VCF guided forest/nonforest (FNF) masking. The process chain for each of these methods includes a threshold classification of MODIS VCF, training data or signature extraction, signature evaluation, k-nearest neighbor classification, analyst-guided reclassification, and postclassification image differencing to generate forest change maps. Comparisons of all methods were based on an accuracy assessment using 500 validation pixels. Results of this accuracy assessment indicate that FNF masking had a 5% higher overall accuracy and was superior to conventional supervised classification when estimating forest change. Both methods succeeded in classifying persistently forested and nonforested areas, and both had limitations when classifying forest change.
Near-Optimal Tracking Control of Mobile Robots Via Receding-Horizon Dual Heuristic Programming.
Lian, Chuanqiang; Xu, Xin; Chen, Hong; He, Haibo
2016-11-01
Trajectory tracking control of wheeled mobile robots (WMRs) has been an important research topic in control theory and robotics. Although various tracking control methods with stability have been developed for WMRs, it is still difficult to design optimal or near-optimal tracking controller under uncertainties and disturbances. In this paper, a near-optimal tracking control method is presented for WMRs based on receding-horizon dual heuristic programming (RHDHP). In the proposed method, a backstepping kinematic controller is designed to generate desired velocity profiles and the receding horizon strategy is used to decompose the infinite-horizon optimal control problem into a series of finite-horizon optimal control problems. In each horizon, a closed-loop tracking control policy is successively updated using a class of approximate dynamic programming algorithms called finite-horizon dual heuristic programming (DHP). The convergence property of the proposed method is analyzed and it is shown that the tracking control system based on RHDHP is asymptotically stable by using the Lyapunov approach. Simulation results on three tracking control problems demonstrate that the proposed method has improved control performance when compared with conventional model predictive control (MPC) and DHP. It is also illustrated that the proposed method has lower computational burden than conventional MPC, which is very beneficial for real-time tracking control.
Tuning time-frequency methods for the detection of metered HF speech
NASA Astrophysics Data System (ADS)
Nelson, Douglas J.; Smith, Lawrence H.
2002-12-01
Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign and other codes are frequently metered. In tactical communications and air traffic control, location, heading and identification codes may be metered. Moreover, metering may be expected to survive even in HF communications, which are corrupted by noise, interference and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe time-frequency methods which have been successfully adapted to the problems of mitigating HF signal conditions and detecting metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, HF single-sideband (SSB) communications pose the problems of carrier mistuning, interfering signals such as manual Morse, and fast automatic gain control (AGC). We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and to effectively invert the fast automatic gain function.
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been investigated for acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated on the raw sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals that processes coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimum of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computational burden, is analyzed numerically. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy, and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic conditions. Finally, an optimal loop that combines the proposed and conventional methods is designed to achieve optimal performance in both weak and strong signal conditions.
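A hedged sketch of the central step: fitting amplitude, carrier phase, and Doppler to coherent integration outputs by least squares with a Levenberg-Marquardt search, which coincides with the maximum-likelihood fit under white Gaussian noise. The constant-amplitude, linear-phase signal model below is an illustrative assumption, not the paper's exact cost function.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, z, t):
    a, phi, fd = params                                   # amplitude, phase, Doppler (Hz)
    model = a * np.exp(1j * (phi + 2 * np.pi * fd * t))
    r = z - model
    return np.concatenate([r.real, r.imag])               # stack real/imag parts for LM

def mle_carrier(z, t, x0=(1.0, 0.0, 0.0)):
    """z: complex coherent integration outputs at epoch times t (seconds)."""
    sol = least_squares(residuals, x0, args=(z, t), method='lm')
    return sol.x                                          # estimates feeding the Kalman filter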
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a recent international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, by contrast, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment, so the segmented regions are not limited to squares or rectangles and can better match the boundaries between moving objects. The proposed method can therefore improve the performance of motion compensation. In addition, adaptive prediction of the slice shape from the region shapes of the surrounding macroblocks reduces the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques, such as mode decision using rate-distortion optimization, can still be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, as in conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec, and a bit rate improvement of 5% is confirmed in comparison with H.264.
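A small sketch of the sliced-block partition itself, with assumed block size and line endpoints: a boolean mask splits a macroblock along an arbitrary line, and each side is predicted with its own motion vector. Bounds checking and the adaptive shape prediction are omitted.

import numpy as np

def slice_mask(size, p0, p1):
    """True on one side of the line through p0 = (x0, y0) and p1 = (x1, y1)."""
    ys, xs = np.mgrid[0:size, 0:size]
    (x0, y0), (x1, y1) = p0, p1
    return (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0) >= 0

def predict_sliced(ref, top_left, mv_a, mv_b, p0, p1, size=16):
    """Motion-compensate one macroblock; caller keeps both shifted blocks inside ref."""
    y, x = top_left
    m = slice_mask(size, p0, p1)
    pa = ref[y + mv_a[0]:y + mv_a[0] + size, x + mv_a[1]:x + mv_a[1] + size]
    pb = ref[y + mv_b[0]:y + mv_b[0] + size, x + mv_b[1]:x + mv_b[1] + size]
    return np.where(m, pa, pb)   # region A from vector a, region B from vector b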
Zemtsova, Galina E.; Montgomery, Merrill; Levin, Michael L.
2015-01-01
Studies on the natural transmission cycles of zoonotic pathogens and the reservoir competence of vertebrate hosts require methods for reliable diagnosis of infection in wild and laboratory animals. Several PCR-based assays have been developed for detection of infections caused by Spotted Fever group Rickettsia spp. in a variety of animal tissues. These assays are widely used by researchers, but they differ in sensitivity and reliability. We compared the sensitivity of five previously published conventional PCR assays and one SYBR green-based real-time PCR assay for the detection of rickettsial DNA in blood and tissue samples from Rickettsia-infected laboratory animals (n = 87). The real-time PCR, which detected rickettsial DNA in 37.9% of samples, was the most sensitive. The next best were the semi-nested ompA assay and the conventional rpoB PCR, which detected 18.4% and 14.9% of samples as positive, respectively. Conventional assays targeting the ompB, gltA, and hrtA genes were the least sensitive. We therefore recommend the SYBR green-based real-time PCR as a tool for the detection of rickettsial DNA in animal samples because of its higher sensitivity compared with the more traditional assays. PMID:25607846
Palmucci, Stefano; Roccasalva, Federica; Piccoli, Marina; Fuccio Sanzà, Giovanni; Foti, Pietro Valerio; Ragozzino, Alfonso; Milone, Pietro; Ettorre, Giovanni Carlo
2017-01-01
Since its introduction, MRCP has been improved over the years by several technical advances and innovations. It is a noninvasive method for depicting the biliary tree, based on heavily T2-weighted images. Conventionally, its protocol includes two-dimensional single-shot fast spin-echo images, acquired with thin sections or with multiple thick slabs. In recent years, three-dimensional T2-weighted fast-recovery fast spin-echo images have been added to the conventional protocol, improving the demonstration of biliary anatomy and yielding a significant benefit over conventional 2D imaging. A significant innovation has been the introduction of hepatobiliary contrast agents, gadoxetic acid and gadobenate dimeglumine: they are excreted into the bile canaliculi, allowing opacification of the biliary tree. Recently, 3D interpolated T1-weighted spoiled gradient-echo images, obtained after hepatobiliary contrast agent administration, have been proposed for evaluation of the biliary tree. The acquisition of these excretory phases thus improves the diagnostic capability of conventional MRCP based on T2-weighted acquisitions. In this paper, technical features of contrast-enhanced magnetic resonance cholangiography are briefly discussed; the main diagnostic features of the hepatobiliary phase are shown, emphasizing the benefit of enhanced cholangiography in comparison with conventional MRCP.
Microleakage of adhesive restorative materials.
Gladys, S; Van Meerbeek, B; Lambrechts, P; Vanherle, G
2001-06-01
To compare the marginal sealing ability of two conventional and one polyacid-modified resin-based composite, and two conventional and three resin-modified glass-ionomers in conventional cylindrical box cavities following a silver-staining microleakage evaluation method. In 80 freshly extracted and caries-free human third molars, three standardized cylindrical butt-joint cavities were prepared: the first cavity in coronal enamel, the second at the cemento-enamel junction (CEJ) and the third completely in root cementum. A control group of 10 additional teeth was chosen. After the cavities were restored randomly using the eight restorative materials tested, the specimens were first stored in distilled water at 37 degrees C for 7 days and then thermocycled (500 cycles). Thereafter, the specimens were centrifuged for 10 min in plastic bottles containing 50 wt% silver nitrate aqueous solution. The degree of microleakage was recorded at four different depths along the restoration margins using an optical stereomicroscope equipped with a measuring gauge. None of the tested systems prevented microleakage completely, but the extent of leakage decreased towards the bottom of the restorations. The resin-modified glass-ionomers performed better than the conventional resin-based composites and conventional glass-ionomers. Distinct leakage patterns were recorded among all materials investigated. Complete marginal sealing could still not be reached with the new adhesive restorative materials.
Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham
2002-01-01
Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...
NASA Astrophysics Data System (ADS)
Huang, Yin-Nan
Nuclear power plants (NPPs) and spent nuclear fuel (SNF) are required by code and regulations to be designed for a family of extreme events, including very rare earthquake shaking, loss of coolant accidents, and tornado-borne missile impacts. Blast loading due to malevolent attack became a design consideration for NPPs and SNF after the terrorist attacks of September 11, 2001. The studies presented in this dissertation assess the performance of sample conventional and base isolated NPP reactor buildings subjected to seismic effects and blast loadings. The response of the sample reactor building to tornado-borne missile impacts and internal events (e.g., loss of coolant accidents) will not change if the building is base isolated and so these hazards were not considered. The sample NPP reactor building studied in this dissertation is composed of containment and internal structures with a total weight of approximately 75,000 tons. Four configurations of the reactor building are studied, including one conventional fixed-base reactor building and three base-isolated reactor buildings using Friction Pendulum(TM), lead rubber and low damping rubber bearings. The seismic assessment of the sample reactor building is performed using a new procedure proposed in this dissertation that builds on the methodology presented in the draft ATC-58 Guidelines and the widely used Zion method, which uses fragility curves defined in terms of ground-motion parameters for NPP seismic probabilistic risk assessment. The new procedure improves the Zion method by using fragility curves that are defined in terms of structural response parameters since damage and failure of NPP components are more closely tied to structural response parameters than to ground motion parameters. Alternate ground motion scaling methods are studied to help establish an optimal procedure for scaling ground motions for the purpose of seismic performance assessment. The proposed performance assessment procedure is used to evaluate the vulnerability of the conventional and base-isolated NPP reactor buildings. The seismic performance assessment confirms the utility of seismic isolation at reducing spectral demands on secondary systems. Procedures to reduce the construction cost of secondary systems in isolated reactor buildings are presented. A blast assessment of the sample reactor building is performed for an assumed threat of 2000 kg of TNT explosive detonated on the surface with a closest distance to the reactor building of 10 m. The air and ground shock waves produced by the design threat are generated and used for performance assessment. The air blast loading to the sample reactor building is computed using a Computational Fluid Dynamics code Air3D and the ground shock time series is generated using an attenuation model for soil/rock response. Response-history analysis of the sample conventional and base isolated reactor buildings to external blast loadings is performed using the hydrocode LS-DYNA. The spectral demands on the secondary systems in the isolated reactor building due to air blast loading are greater than those for the conventional reactor building but much smaller than those spectral demands associated with Safe Shutdown Earthquake shaking. The isolators are extremely effective at filtering out high acceleration, high frequency ground shock loading.
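As a minimal illustration of the kind of fragility curve the proposed procedure uses, the sketch below evaluates a lognormal fragility defined on a structural response parameter rather than a ground-motion parameter; the median capacity and dispersion values are invented for the example.

from math import log
from scipy.stats import norm

def fragility(demand, median=2.0, beta=0.4):
    """P(component failure | structural response demand), lognormal model (assumed)."""
    return norm.cdf(log(demand / median) / beta)

print(fragility(3.0))   # demand at 1.5x the median capacity -> roughly 0.84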
Rodríguez, Guillermo López; Weber, Joshua; Sandhu, Jaswinder Singh; Anastasio, Mark A.
2011-01-01
We propose and experimentally demonstrate a new method for complex-valued wavefield retrieval in off-axis acoustic holography. The method involves use of an intensity-sensitive acousto-optic (AO) sensor, optimized for use at 3.3 MHz, to record the acoustic hologram and a computational method for reconstruction of the object wavefield. The proposed method may circumvent limitations of conventional implementations of acoustic holography and may facilitate the development of acoustic-holography-based biomedical imaging methods. PMID:21669451
Nonlinear dynamics based digital logic and circuits.
Kia, Behnam; Lindner, John F; Ditto, William L
2015-01-01
We discuss the role and importance of dynamics in the brain and biological neural networks and argue that dynamics is one of the main missing elements in conventional Boolean logic and circuits. We summarize a simple dynamics-based computing method and categorize the different techniques that we have introduced to realize logic, functionality, and programmability. We discuss the role and importance of coupled dynamics in networks of biological excitable cells, and then review our simple coupled-dynamics-based method for computing. In this paper, for the first time, we show how dynamics can be used and programmed to implement computation in any given base, including but not limited to base two.
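A toy sketch in the spirit of that dynamics-based computing: logic inputs perturb the initial condition of a chaotic logistic map, one iteration evolves the state, and a threshold reads out the result. The constants below happen to realize a NAND gate and are chosen for illustration; reprogramming the gate amounts to changing them.

def chaotic_nand(i1, i2, x0=0.325, delta=0.25, thresh=0.75):
    x = x0 + (i1 + i2) * delta      # encode the two logic inputs in the initial state
    x = 4.0 * x * (1.0 - x)         # one iteration of the chaotic logistic map
    return int(x > thresh)          # threshold the evolved state to read the output

for a in (0, 1):
    for b in (0, 1):
        print(a, b, chaotic_nand(a, b))   # prints the NAND truth table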
Effect of organic and conventional rearing system on the mineral content of pork.
Zhao, Yan; Wang, Donghua; Yang, Shuming
2016-08-01
Dietary composition and rearing regime largely determine the trace elemental composition of pigs, and consequently their concentration in animal products. The present study evaluates thirteen macro- and trace element concentrations in pork from organic and conventional farms. Conventional pigs were given a commercial feed with added minerals; organic pigs were given a feed based on organic feedstuffs. The content of macro-elements (Na, K, Mg and Ca) and some trace elements (Ni, Fe, Zn and Sr) in organic and conventional meat samples showed no significant differences (P>0.05). Several trace element concentrations in organic pork were significantly higher (P<0.05) compared to conventional pork: Cr (808 and 500 μg/kg in organic and conventional pork, respectively), Mn (695 and 473 μg/kg) and Cu (1.80 and 1.49 mg/kg). The results showed considerable differences in mineral content between samples from pigs reared in organic and conventional systems. Our results also indicate that authentication of organic pork can be realized by applying multivariate chemometric methods such as discriminant analysis to this multi-element data. Copyright © 2016 Elsevier Ltd. All rights reserved.
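A hedged sketch of the suggested authentication route: discriminant analysis on the multi-element profile. The feature matrix below is random placeholder data standing in for measured concentrations; real inputs would be the thirteen elements per meat sample.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 13))        # 60 samples x 13 elements (placeholder data)
y = np.repeat([0, 1], 30)            # 0 = conventional, 1 = organic
X[y == 1, :3] += 0.8                 # mimic higher Cr/Mn/Cu in organic pork

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())   # cross-validated accuracy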
Mixing Problem Based Learning and Conventional Teaching Methods in an Analog Electronics Course
ERIC Educational Resources Information Center
Podges, J. M.; Kommers, P. A. M.; Winnips, K.; van Joolingen, W. R.
2014-01-01
This study, undertaken at the Walter Sisulu University of Technology (WSU) in South Africa, describes how problem-based learning (PBL) affects the first-year analog electronics course when PBL is compared with the lecturing mode. Problems were designed to match real-life situations. Data between the experimental group and the control group that…
Summarization as the base for text assessment
NASA Astrophysics Data System (ADS)
Karanikolas, Nikitas N.
2015-02-01
We present a model that applies shallow text summarization as a cheap (in terms of resources needed) process for automatic, machine-based free-text answer assessment (AA). Evaluation of the proposed method leads to the inference that conventional assessment (CA, human assessment of free-text answers) has no obvious mechanical replacement; finding one remains a research challenge.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new approach is proposed to control JPEG2000 encoding so as to reach a desired perceptual quality. The method is based on a vision model that incorporates various masking effects of human visual perception and on a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with conventional rate-based distortion-minimization JPEG2000 encoding, the new method provides a way to generate consistent-quality images at a lower bit rate.
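A minimal sketch of the general pooling form such a metric takes: masked quantization errors are combined by a spatial Minkowski sum within each subband and a spectral Minkowski sum across subbands. The exponents and sensitivity weights here are illustrative assumptions, not the paper's calibrated vision model.

import numpy as np

def perceptual_distortion(subband_errors, weights, beta_s=2.0, beta_f=4.0):
    """subband_errors: list of 2-D arrays of masked quantization errors, one per subband."""
    pooled = []
    for e, w in zip(subband_errors, weights):
        pooled.append((np.abs(w * e) ** beta_s).sum() ** (1.0 / beta_s))  # spatial summation
    return (np.array(pooled) ** beta_f).sum() ** (1.0 / beta_f)           # spectral summation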
Applying problem-based learning to otolaryngology teaching.
Abou-Elhamd, K A; Rashad, U M; Al-Sultan, A I
2011-02-01
Undergraduate medical education requires ongoing improvement in order to keep pace with the changing demands of twenty-first century medical practice. Problem-based learning is increasingly being adopted in medical schools worldwide. We review its application in the specialty of ENT, and we present our experience of using this approach combined with more traditional methods. We introduced problem-based learning techniques into the ENT course taught to fifth-year medical students at Al-Ahsa College of Medicine, King Faisal University, Saudi Arabia. As a result, the teaching schedule included both clinical and theoretical activities. Six clinical teaching days were allowed for history-taking, examination techniques and clinical scenario discussion. Case scenarios were discussed in small group teaching sessions. Conventional methods were employed to teach audiology and ENT radiology (one three-hour session each); a three-hour simulation laboratory session and three-hour student presentation were also scheduled. In addition, students attended out-patient clinics for three days, and used multimedia facilities to learn about various otolaryngology diseases (in another three-hour session). This input was supplemented with didactic teaching in the form of 16 instructional lectures per semester (one hour per week). From our teaching experience, we believe that the application of problem-based learning to ENT teaching has resulted in a substantial increase in students' knowledge. Furthermore, students have given encouraging feedback on their experience of combined problem-based learning and conventional teaching methods.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment studies. The Yanhe water system is characterized by small water volumes and narrow river channels, which make conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds on Landsat/TM images of the Yanhe watershed are first evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood are then applied before NDWI water extraction to separate built-up land from small linear rivers. With the proposed method, a water map is extracted from 2010 Landsat/TM images of the watershed. An accuracy assessment compares the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and effectively suppresses confusion with background objects compared with the conventional water indexes. By integrating NDWI with multi-spectral threshold segmentation, MST-NDWI exploits richer spectral information and yields accurate water extraction in the Yanhe watershed.
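A compact sketch of the MST-NDWI idea under assumed thresholds: NDWI = (green - NIR) / (green + NIR) is computed as usual, but multi-spectral thresholds on TM1/TM4/TM5 first mask out built-up land that NDWI alone confuses with narrow rivers. The threshold values below are placeholders; the paper derives its thresholds by maximum likelihood.

import numpy as np

def mst_ndwi(tm1, tm2, tm4, tm5, t1=0.2, t4=0.15, t5=0.1, ndwi_t=0.0):
    """TM2 = green, TM4 = NIR; returns a boolean water map."""
    ndwi = (tm2 - tm4) / (tm2 + tm4 + 1e-9)              # standard NDWI
    candidate = (tm1 > t1) & (tm4 < t4) & (tm5 < t5)     # multi-spectral pre-segmentation
    return candidate & (ndwi > ndwi_t)                   # final water extraction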
Wijsman, Liselotte Willemijn; Cachucho, Ricardo; Hoevenaar-Blom, Marieke Peternella; Mooijaart, Simon Pieter; Richard, Edo
2017-01-01
Background: Smartphone-assisted technologies potentially provide the opportunity for large-scale, long-term, repeated monitoring of cognitive functioning at home. Objective: The aim of this proof-of-principle study was to evaluate the feasibility and validity of performing cognitive tests in people at increased risk of dementia using smartphone-based technology during a 6-month follow-up period. Methods: We used the smartphone-based app iVitality to evaluate five cognitive tests based on conventional neuropsychological tests (Memory-Word, Trail Making, Stroop, Reaction Time, and Letter-N-Back) in healthy adults. Feasibility was tested by studying the adherence of all participants to the smartphone-based cognitive tests. Validity was studied by assessing the correlation between conventional neuropsychological tests and smartphone-based cognitive tests and by studying the effect of repeated testing. Results: We included 151 participants (mean age in years=57.3, standard deviation=5.3). Mean adherence to assigned smartphone tests during 6 months was 60% (SD 24.7). There was moderate correlation between the first smartphone-based test attempt and the conventional test for the Stroop test and the Trail Making test, with Spearman ρ=.3-.5 (P<.001). Correlation increased for both tests when comparing the conventional test with the mean score of all attempts a participant had made, with the highest correlation for Stroop panel 3 (ρ=.62, P<.001). Performance on the Stroop and the Trail Making tests improved over time, suggesting a learning effect, but the scores on the Letter-N-Back, the Memory-Word, and the Reaction Time tests remained stable. Conclusions: Repeated smartphone-assisted cognitive testing is feasible with reasonable adherence and moderate relative validity for the Stroop and the Trail Making tests compared with conventional neuropsychological tests. Smartphone-based cognitive testing seems promising for large-scale data collection in population studies. PMID:28546139
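The relative-validity check reported above reduces to a rank correlation between paired scores; a minimal sketch with invented per-participant values:

import numpy as np
from scipy.stats import spearmanr

smartphone = np.array([12.1, 15.3, 9.8, 14.2, 11.5, 13.7])     # placeholder scores
conventional = np.array([11.0, 16.1, 10.2, 13.5, 12.0, 14.8])  # placeholder scores
rho, p = spearmanr(smartphone, conventional)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")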
Tsirogiannis, Panagiotis; Reissmann, Daniel R; Heydecke, Guido
2016-09-01
In existing published reports, some studies indicate the superiority of digital impression systems in terms of the marginal accuracy of ceramic restorations, whereas others show that the conventional method provides restorations with better marginal fit than fully digital fabrication. Which impression method provides the lowest mean marginal discrepancy is therefore inconclusive. The findings from those studies cannot be easily generalized, and in vivo studies that could provide valid and meaningful information remain limited. The purpose of this study was to systematically review existing reports and evaluate the marginal fit of ceramic single-tooth restorations after either digital or conventional impression methods by combining the available evidence in a meta-analysis. The search strategy for this systematic review was based on a Population, Intervention, Comparison, and Outcome (PICO) framework. For the statistical analysis, the mean marginal fit values of each study were extracted and categorized according to impression method to calculate the mean value and 95% confidence interval (CI) of each category, and to evaluate the impact of each impression method on marginal adaptation by comparing digital and conventional techniques separately for in vitro and in vivo studies. Twelve of the 63 records identified by the database search were included in the meta-analysis. For the in vitro studies, the mean marginal fit of ceramic restorations fabricated after conventional impressions was 58.9 μm (95% CI: 41.1-76.7 μm), whereas after digital impressions it was 63.3 μm (95% CI: 50.5-76.0 μm). In the in vivo studies, the mean marginal discrepancy of restorations after digital impressions was 56.1 μm (95% CI: 46.3-65.8 μm), whereas after conventional impressions it was 79.2 μm (95% CI: 59.6-98.9 μm). No significant difference was observed in the marginal discrepancy of single-unit ceramic restorations fabricated after digital or conventional impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
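A hedged sketch of one standard way to pool per-study means with a 95% CI, an inverse-variance weighted (fixed-effect) mean; the review does not specify its model beyond mean values with confidence intervals, and the standard errors below are invented.

import numpy as np

means = np.array([58.9, 63.3, 56.1])    # example study means (micrometers)
ses = np.array([9.1, 6.5, 5.0])         # assumed standard errors

w = 1.0 / ses ** 2                      # inverse-variance weights
pooled = (w * means).sum() / w.sum()
se_pooled = (1.0 / w.sum()) ** 0.5
print(f"pooled mean {pooled:.1f} um, "
      f"95% CI {pooled - 1.96 * se_pooled:.1f}-{pooled + 1.96 * se_pooled:.1f} um")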
Wong, M S; Cheng, C Y; Ng, B K W; Lam, T P; Chiu, S W
2006-01-01
Spinal orthoses are commonly prescribed to patients with moderate adolescent idiopathic scoliosis (AIS) to prevent further deterioration. In the conventional manufacturing method, plaster bandages are used to capture the patient's body contour, and the plaster cast is rectified manually. With the introduction of a CAD/CAM system, a series of automated processes from body scanning to digital rectification and milling of the positive model can be performed quickly and accurately. This project studies the impact of the CAD/CAM method compared with the conventional method. Among the 147 recruited subjects fitted with spinal orthoses (43 using the conventional method and 104 using the CAD/CAM method), significant decreases (p<0.05) were found in the Cobb angles when comparing the pre-intervention data with those of the first year of intervention. Regarding the learning curve, orthotists became competent with the CAD/CAM technique within four years. The mean productivity of the CAD/CAM method is 2.75 times higher than that of the conventional method. The CAD/CAM method achieves similar clinical outcomes and, given its high efficiency, could be considered a substitute for the conventional method in fabricating spinal orthoses for patients with AIS.