Improvement of the Owner Distinction Method for Healing-Type Pet Robots
NASA Astrophysics Data System (ADS)
Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya
Animal Assisted Therapy, in which pets are used to heal humans and reduce stress, has attracted growing attention. However, because animals raise sanitation and safety concerns, live pets are difficult to use in hospitals, and pet robots have therefore attracted interest as a substitute: they pose no sanitation or safety problems and can stand in for animal pets in therapy. In our previous study, in which pet robots distinguished their owners as an animal pet does, we used a puppet-type pet robot equipped with pressure-type touch sensors; however, the accuracy of that method was insufficient for practical use. In this paper, we propose a method that improves the accuracy of owner distinction. The proposed method can be applied not only to pressure-type touch sensors but also to capacitive touch sensors such as those installed in AIBO. Experimental results show the performance of the proposed method and confirm that it improves the distinction performance over the conventional method.
Factorization and reduction methods for optimal control of distributed parameter systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Powers, R. K.
1985-01-01
A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.
Business Models for Training and Performance Improvement Departments
ERIC Educational Resources Information Center
Carliner, Saul
2004-01-01
Although typically applied to entire enterprises, the concept of business models applies to training and performance improvement groups. Business models are "the method by which firm[s] build and use [their] resources to offer.. value." Business models affect the types of projects, services offered, skills required, business processes, and type of…
ERIC Educational Resources Information Center
Bahr, Damon; Monroe, Eula E.; Shaha, Steven H.
2013-01-01
The purpose of this study was to compare changes in beliefs of two groups of preservice teachers involved in two types of opportunities to immediately apply methods for teaching accompanying an elementary mathematics methods course. Students in one group applied the methods learned in class through weekly 30-minute peer-teaching sessions, while…
Scientific use of the finite element method in Orthodontics
Knop, Luegya; Gandini, Luiz Gonzaga; Shintcovsk, Ricardo Lima; Gandini, Marcia Regina Elisa Aparecida Schiavon
2015-01-01
INTRODUCTION: The finite element method (FEM) is an engineering resource used to calculate the stress and deformation of complex structures, and it has been widely used in orthodontic research. Because it is a non-invasive and accurate method that provides quantitative and detailed data on the physiological reactions that may occur in tissues, the FEM makes it possible to visualize these tissue responses by observing the areas of stress created by applied orthodontic mechanics. OBJECTIVE: This article aims at reviewing and discussing the stages of applying the finite element method and its applicability in Orthodontics. RESULTS: The FEM is able to evaluate the stress distribution at the interface between the periodontal ligament and alveolar bone, as well as the displacement trend in various types of tooth movement when different types of orthodontic devices are used; doing so requires familiarity with software specific to this purpose. CONCLUSIONS: The FEM is an important experimental method for answering questions about tooth movement, overcoming the disadvantages of other experimental methods. PMID:25992996
Statistical methods used in articles published by the Journal of Periodontal and Implant Science.
Choi, Eunsil; Lyu, Jiyoung; Park, Jinyoung; Kim, Hae-Young
2014-12-01
The purposes of this study were to assess the trend of use of statistical methods including parametric and nonparametric methods and to evaluate the use of complex statistical methodology in recent periodontal studies. This study analyzed 123 articles published in the Journal of Periodontal & Implant Science (JPIS) between 2010 and 2014. Frequencies and percentages were calculated according to the number of statistical methods used, the type of statistical method applied, and the type of statistical software used. Most of the published articles considered (64.4%) used statistical methods. Since 2011, the percentage of JPIS articles using statistics has increased. On the basis of multiple counting, we found that the percentage of studies in JPIS using parametric methods was 61.1%. Further, complex statistical methods were applied in only 6 of the published studies (5.0%), and nonparametric statistical methods were applied in 77 of the published studies (38.9% of a total of 198 studies considered). We found an increasing trend towards the application of statistical methods and nonparametric methods in recent periodontal studies and thus, concluded that increased use of complex statistical methodology might be preferred by the researchers in the fields of study covered by JPIS.
Methods for assessing the stability of slopes during earthquakes-A retrospective
Jibson, R.W.
2011-01-01
During the twentieth century, several methods to assess the stability of slopes during earthquakes were developed. Pseudostatic analysis was the earliest method; it involved simply adding a permanent body force representing the earthquake shaking to a static limit-equilibrium analysis. Stress-deformation analysis, a later development, involved much more complex modeling of slopes using a mesh in which the internal stresses and strains within elements are computed based on the applied external loads, including gravity and seismic loads. Stress-deformation analysis provided the most realistic model of slope behavior, but it is very complex and requires a high density of high-quality soil-property data as well as an accurate model of soil behavior. In 1965, Newmark developed a method that effectively bridges the gap between these two types of analysis. His sliding-block model is easy to apply and provides a useful index of co-seismic slope performance. Subsequent modifications to sliding-block analysis have made it applicable to a wider range of landslide types. Sliding-block analysis provides perhaps the greatest utility of all the types of analysis. It is far easier to apply than stress-deformation analysis, and it yields much more useful information than does pseudostatic analysis. © 2010.
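The rigid sliding-block model lends itself to a compact numerical illustration. The sketch below, written under simplifying assumptions (one-directional sliding, a synthetic acceleration record, and an arbitrarily chosen yield acceleration), accumulates Newmark displacement by integrating the relative velocity that builds up whenever ground acceleration exceeds the yield acceleration; it is an illustration of the idea, not material from the article.
```python
import numpy as np

def newmark_displacement(accel_g, dt, ky):
    """Rigid-block Newmark sliding displacement (one-directional sliding).

    accel_g : ground acceleration time history in units of g
    dt      : time step in seconds
    ky      : yield (critical) acceleration in units of g
    Returns cumulative downslope displacement in meters.
    """
    g = 9.81
    rel_vel = 0.0   # relative velocity of the block with respect to the ground (m/s)
    disp = 0.0      # accumulated relative displacement (m)
    for a in accel_g:
        if rel_vel > 0.0 or a > ky:
            # Block slides: relative acceleration is (a - ky) * g
            rel_acc = (a - ky) * g
            rel_vel = max(rel_vel + rel_acc * dt, 0.0)  # sliding cannot reverse
            disp += rel_vel * dt
    return disp

# Synthetic example: 10 s of decaying sinusoidal shaking (hypothetical record)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
accel = 0.35 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # in g

print("Newmark displacement (m):", round(newmark_displacement(accel, dt, ky=0.1), 3))
```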
NASA Astrophysics Data System (ADS)
Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich
2015-03-01
The aim of the paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy and measurement costs. The paper describes how different types of multichannel spectral sensors are calibrated with different types of calibration methods and how the measurement values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper proves that, and shows how, different multichannel spectral sensor modules with different calibration methods can be applied with smartpads for the calculation of measurement results both in the laboratory and in the field. A practical example given is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels by the Saybolt color scale.
Automatic 3D kidney segmentation based on shape constrained GC-OAAM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua
2011-03-01
The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.
Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje
2016-10-01
To present a case study on how to compare various matching methods using different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different results on the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, leading to a change of the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted instead of a logistic regression model showed slightly better performance for balance diagnostics taking into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
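The matching-plus-balance-diagnostics workflow can be illustrated with a minimal propensity-score sketch. The example below is not the study's analysis (which compared 12 matching methods on German claims data); it uses synthetic covariates, a logistic-regression propensity model, 1:1 nearest-neighbour matching with replacement, and absolute standardized mean differences as the balance diagnostic.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic cohort: covariates X and treatment assignment depending on X (hypothetical data)
n = 2000
X = rng.normal(size=(n, 4))
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treated = rng.random(n) < p_treat

# 1) Estimate propensity scores with a logistic regression model
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) 1:1 nearest-neighbour matching on the propensity score (with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
controls = X[~treated][idx.ravel()]

def smd(a, b):
    """Absolute standardized mean difference per covariate."""
    pooled_sd = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2)
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled_sd

print("SMD before matching:", np.round(smd(X[treated], X[~treated]), 3))
print("SMD after matching: ", np.round(smd(X[treated], controls), 3))
```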
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.
Mutation Clusters from Cancer Exome.
Kakushadze, Zura; Yu, Willie
2017-08-15
We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.
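As a rough illustration of clustering mutation profiles, the sketch below applies ordinary scikit-learn KMeans to a synthetic samples-by-mutation-category count matrix; it is not the authors' *K-means algorithm, and the 96-channel layout, cluster count, and counts are hypothetical.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(42)

# Synthetic exome data: 300 samples x 96 mutation categories (hypothetical counts)
n_samples, n_channels, n_true_clusters = 300, 96, 4
profiles = rng.dirichlet(np.ones(n_channels) * 0.3, size=n_true_clusters)
labels_true = rng.integers(n_true_clusters, size=n_samples)
counts = np.vstack([rng.multinomial(500, profiles[k]) for k in labels_true])

# Normalize each sample to a mutation frequency vector before clustering
freqs = normalize(counts.astype(float), norm="l1")

km = KMeans(n_clusters=4, n_init=20, random_state=0).fit(freqs)
print("Cluster sizes:", np.bincount(km.labels_))
```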
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2014-11-01
In this paper we consider multi-frequency highly oscillatory second-order differential equations x″(t) + Mx(t) = f(t, x(t), x′(t)), where the high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). Filon-type methods are known to be an effective approach to numerically solving highly oscillatory problems. Unfortunately, existing Filon-type asymptotic methods do not apply to these highly oscillatory second-order differential equations when M is singular. We propose an efficient improvement of the existing Filon-type asymptotic methods so that the improved methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method that can be performed at lower cost. Accompanying numerical results show the remarkable efficiency.
NASA Astrophysics Data System (ADS)
Esteban, Pere; Beck, Christoph; Philipp, Andreas
2010-05-01
Using data on accidents or damage caused by snow avalanches over the eastern Pyrenees (Andorra and Catalonia), several atmospheric circulation type catalogues have been obtained. For this purpose, different circulation type classification methods based on Principal Component Analysis (T-mode and S-mode using the extreme scores) and on optimization procedures (Improved K-means and SANDRA) were applied. Considering the characteristics of the phenomena studied, not only single-day circulation patterns were taken into account but also sequences of circulation types of varying length. Thus different classifications with different numbers of types and different sequence lengths were obtained using the different classification methods. Between-type variability, within-type variability, and outlier detection procedures were applied to select the best result among the snow avalanche type classifications. Furthermore, days without occurrence of the hazard were also related to the avalanche centroids using pattern correlations, facilitating the calculation of the anomalies between hazardous and non-hazardous days as well as the frequencies of occurrence of hazardous events for each circulation type. Finally, the catalogues considered statistically the best are evaluated using the expert knowledge of avalanche forecasters. A consistent explanation of snow avalanche occurrence by means of circulation sequences is obtained, but only when results from classifications with different sequence lengths are considered together. This work was developed in the framework of COST Action 733 (Harmonisation and Applications of Weather Type Classifications for European regions).
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some form of transformation should be applied to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called the transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
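A minimal illustration of estimating the transformation factor by maximum likelihood is sketched below using scipy.stats.boxcox on synthetic skewed phenotype values; the joint estimation with QTL parameters described in the abstract is not reproduced here.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Skewed phenotype values (hypothetical data): exponentiated normal deviates
y = np.exp(rng.normal(loc=1.0, scale=0.4, size=500))

# Box-Cox: y(lambda) = (y**lambda - 1) / lambda for lambda != 0, log(y) for lambda == 0.
# With lmbda omitted, scipy returns the maximum-likelihood estimate of lambda.
y_transformed, lam = stats.boxcox(y)

print("Estimated lambda:", round(lam, 3))
print("Skewness before/after:", round(stats.skew(y), 3), round(stats.skew(y_transformed), 3))
```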
Low-template methods yield limited extra information for PowerPlex® Fusion 6C profiling.
Duijs, Francisca; van de Merwe, Linda; Sijen, Titia; Benschop, Corina C G
2018-06-01
Advances in autosomal DNA profiling systems enable analyzing increased numbers of short tandem repeat (STR) loci in one reaction. Increasing the number of STR loci increases the amount of information that may be obtained from a (crime scene) sample. In this study, we examined whether even more allelic information can be obtained by applying low-template methods. To this aim, the performance of the PowerPlex® Fusion 6C STR typing system was assessed when increasing the number of PCR cycles or enhancing the capillary electrophoresis (CE) injection settings. Results show that applying these low-template methods yields limited extra information and comes at the cost of more background noise. In addition, the gain in allele detection was much smaller than the gain observed when applying low-template methods to the 15-loci AmpFLSTR® NGM™ system. Consequently, the PowerPlex® Fusion 6C STR typing system was implemented using standard settings only; low-template methods were not implemented for our routine forensic casework. Copyright © 2018 Elsevier B.V. All rights reserved.
Retroreflectivity database study.
DOT National Transportation Integrated Search
2009-07-16
Pavement marking delineation is one method to provide positive driver guidance on all roadway types. There are a variety of pavement markings used by local and state transportation agencies in the United States. The type of pavement marking applied t...
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
On the exact solutions of high order wave equations of KdV type (I)
NASA Astrophysics Data System (ADS)
Bulut, Hasan; Pandir, Yusuf; Baskonus, Haci Mehmet
2014-12-01
In this paper, by means of a proper transformation and symbolic computation, we study high order wave equations of KdV type (I). We obtain a classification of exact solutions, containing soliton, rational, trigonometric and elliptic function solutions, by using the extended trial equation method. The motivation of this paper is thus to utilize the extended trial equation method to explore new solutions of the high order wave equation of KdV type (I). The method is confirmed by applying it to selected nonlinear equations of this kind.
High Resolution Melting (HRM) applied to wine authenticity.
Pereira, Leonor; Gomes, Sónia; Castro, Cláudia; Eiras-Dias, José Eduardo; Brazão, João; Graça, António; Fernandes, José R; Martins-Lopes, Paula
2017-02-01
Wine authenticity methods are in increasing demand mainly in Denomination of Origin designations. The DNA-based methodologies are a reliable means of tracking food/wine varietal composition. The main aim of this work was the study of High Resolution Melting (HRM) application as a screening method for must and wine authenticity. Three sample types (leaf, must and wine) were used to validate the three developed HRM assays (Vv1-705bp; Vv2-375bp; and Vv3-119bp). The Vv1 HRM assay was only successful when applied to leaf and must samples. The Vv2 HRM assay successfully amplified all sample types, allowing genotype discrimination based on melting temperature values. The smallest amplicon, Vv3, produced a coincident melting curve shape in all sample types (leaf and wine) with corresponding genotypes. This study presents sensitive, rapid and efficient HRM assays applied for the first time to wine samples suitable for wine authenticity purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pérez-Olmos, R; Rios, A; Fernández, J R; Lapa, R A; Lima, J L
2001-01-05
In this paper, the construction and evaluation of a nitrate-selective electrode with improved sensitivity, built like a conventional electrode (ISE) but using an operational amplifier to sum the potentials supplied by four membranes (ESOA), is described. The two types of electrodes, without an inner reference solution, were constructed using tetraoctylammonium bromide as the sensor, dibutylphthalate as the solvent mediator and PVC as the plastic matrix; the membranes obtained were applied directly onto a conductive epoxy resin support. After a comparative evaluation of their working characteristics, they were used in the determination of nitrate in different types of tobacco. The limit of detection of the direct potentiometric method developed was found to be 0.18 g kg(-1), and the precision and accuracy of the method, when applied to eight different samples of tobacco and expressed in terms of mean R.S.D. and average percentage of spike recovery, were 0.6 and 100.3%, respectively. The comparison of variances showed, on all occasions, that the results obtained by the ESOA were similar to those obtained by the conventional ISE, but with higher precision. Linear regression analysis showed good agreement (r=0.9994) between the results obtained by the developed potentiometric method and those of a spectrophotometric method based on brucine, adopted as the reference method, when both were applied simultaneously to 32 samples of different types of tobacco.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Applying the Transtheoretical Model to Investigate Behavioural Change in Type 2 Diabetic Patients
ERIC Educational Resources Information Center
Lin, Shu-Ping; Wang, Ming-Jye
2013-01-01
Background: Long-term behaviour change in type 2 diabetic patients may provide effective glycemic control. Purpose: To investigate the key factors that promote behaviour change in diabetic subjects using the transtheoretical model. Methods: Subjects were selected by purposive sampling from type 2 diabetes outpatients. Self-administered…
ERIC Educational Resources Information Center
Bronikowski, Michal; Bronikowska, Malgorzata; Kantanista, Adam; Ciekot, Monika; Laudanska-Krzeminska, Ida; Szwed, Szymon
2009-01-01
Study aim: To assess the intensities of three types of physical education (PE) classes corresponding to the phases of the teaching/learning process: Type 1--acquiring and developing skills, Type 2--selecting and applying skills, tactics and compositional principles and Type 3--evaluating and improving performance skills. Material and methods: A…
Solving the interval type-2 fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. There are some methods that have been developed in order to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was firstly proposed to find real roots of fuzzy polynomial equation. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation to a system of crisp interval type-2 fuzzy polynomial equation. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by numerical example.
Application of ride quality technology to predict ride satisfaction for commuter-type aircraft
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Kuhlthau, A. R.; Richards, L. G.
1975-01-01
A method was developed to predict passenger satisfaction with the ride environment of a transportation vehicle. This general approach was applied to a commuter-type aircraft for illustrative purposes. The effects of terrain, altitude, and seat location were examined. The method predicts the variation in the proportion of passengers satisfied for any set of flight conditions. In addition, several noncommuter aircraft were analyzed for comparison, and other uses of the model are described. The method has advantages for design, evaluation, and operating decisions.
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well-known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.
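A small sketch of this kind of solver setup is given below: a complex, slightly nonsymmetric Helmholtz-type sparse system is solved with SciPy's BiCGSTAB (a stabilized variant of the Bi-Conjugate Gradient method mentioned above) and an incomplete-LU preconditioner. The test matrix is a hypothetical one-dimensional discretization, not the paper's shallow-water finite element system.
```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Complex, nonsymmetric Helmholtz-type test matrix on a 1-D grid (hypothetical problem)
n = 400
h = 1.0 / (n + 1)
k = 10.0 + 5.0j                       # complex wavenumber introduces a complex shift
main = (2.0 / h**2 - k**2) * np.ones(n, dtype=complex)
off = -1.0 / h**2 * np.ones(n - 1, dtype=complex)
A = sp.diags([off * 0.98, main, off * 1.02], [-1, 0, 1], format="csc")  # slight asymmetry
b = np.ones(n, dtype=complex)

# Incomplete LU preconditioner wrapped as a LinearOperator
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```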
Electronic system for floor surface type detection in robotics applications
NASA Astrophysics Data System (ADS)
Tarapata, Grzegorz; Paczesny, Daniel; Tarasiuk, Łukasz
2016-11-01
The paper reports a recognition method based on ultrasonic transducers for detecting surface types. An ultrasonic signal is transmitted toward the examined substrate; the reflected and scattered signal then returns to a second ultrasonic receiver. The measuring signal is generated by a piezoelectric transducer located at a specified distance from the tested substrate, and the detector is a second piezoelectric transducer located next to the transmitter. Depending on the type of substrate exposed to the ultrasonic wave, the signal is partially absorbed in the material, scattered, and reflected toward the receiver. To measure the level of the received signal, a dedicated electronic circuit was designed and implemented in the presented system. The system was designed to recognize two types of floor surface: solid (such as concrete, ceramic tiles, or wood) and soft (carpets and floor coverings). The method will be applied in an electronic detection system for autonomous cleaning robots to select the appropriate cleaning method. This work presents the concept of using ultrasonic signals, the design of both the measurement system and the measuring stand, and a number of test results that validate the correctness of the applied ultrasonic method.
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) with the second-order central difference interval method. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms were applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
Park, Jung In; Pruinelli, Lisiane; Westra, Bonnie L; Delaney, Connie W
2014-01-01
With the pervasive implementation of electronic health records (EHR), new opportunities arise for nursing research through the use of EHR data. Increasingly, comparative effectiveness research within and across health systems is conducted to identify the impact of nursing on improving health and health care and lowering costs of care. Use of EHR data for this type of research requires national and internationally recognized nursing terminologies to normalize the data. Research methods are evolving as large data sets become available through EHRs. Little is known about the types of research and analytic methods applied to nursing research using EHR data normalized with nursing terminologies. The purpose of this paper is to report on a subset of a systematic review of peer-reviewed studies related to applied nursing informatics research involving EHR data and standardized nursing terminologies.
Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jianwei; Zhang, Yong
2014-12-07
We performed pseudopotential-based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
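The train-on-model-output idea can be illustrated compactly. In the sketch below, a deliberately simplified two-band forward reflectance model (hypothetical, not the multiple-scattering model used in the study) generates reflectance-LAI pairs over varying soil backgrounds, and a small multilayer perceptron, standing in for the backpropagation network, is trained to invert LAI from reflectance.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

def toy_canopy_reflectance(lai, soil_albedo):
    """Very simplified two-band forward model (illustrative only, not the paper's model)."""
    red = soil_albedo * np.exp(-0.8 * lai) + 0.04 * (1 - np.exp(-0.8 * lai))
    nir = soil_albedo * np.exp(-0.5 * lai) + 0.45 * (1 - np.exp(-0.5 * lai))
    return np.column_stack([red, nir])

# Training pairs generated from the forward model over a range of LAI and soil backgrounds
lai = rng.uniform(0.0, 3.0, 5000)
soil = rng.uniform(0.05, 0.35, 5000)                       # varying soil reflectance background
X = toy_canopy_reflectance(lai, soil) + rng.normal(0, 0.005, (5000, 2))  # add sensor noise

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, lai)

# Invert LAI for new reflectance measurements simulated on different soil backgrounds
X_test = toy_canopy_reflectance(np.array([0.5, 1.5]), np.array([0.30, 0.10]))
print("Predicted LAI:", np.round(net.predict(X_test), 2))
```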
Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.
Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R
2014-03-01
A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.
Shared or Integrated: Which Type of Integration is More Effective Improves Students’ Creativity?
NASA Astrophysics Data System (ADS)
Mariyam, M.; Kaniawati, I.; Sriyati, S.
2017-09-01
Integrated science learning has various types of integration. This study aims to apply the shared and integrated types of integration with the project-based learning (PjBL) model to improve students' creativity on a waste recycling theme. The research method used is a quasi-experiment with the matching-only pretest-posttest design. The samples of this study are 108 students, consisting of 36 students (first experimental class), 35 students (second experimental class) and 37 students (control class) at a junior high school in Tanggamus, Lampung. The results show that there are differences in creativity improvement among the classes taught with the PjBL model with the shared type of integration, with the integrated type of integration, and without any integration on the waste recycling theme. The class taught with the PjBL model and the shared type of integration showed higher creativity improvement than the classes taught with the integrated type of integration and without any integration. Integrated science learning using the shared type combines only two lessons, so an intact concept results. Thus, the PjBL model with the shared type of integration improves students' creativity more effectively than the integrated type.
NASA Astrophysics Data System (ADS)
Motegi, Kohei
2018-05-01
We present a method to analyze the wavefunctions of six-vertex models by extending the Izergin-Korepin analysis originally developed for domain wall boundary partition functions. First, we apply the method to the case of the basic wavefunctions of the XXZ-type six-vertex model. By giving the Izergin-Korepin characterization of the wavefunctions, we show that these wavefunctions can be expressed as multiparameter deformations of the quantum group deformed Grothendieck polynomials. As a second example, we show that the Izergin-Korepin analysis is effective for analysis of the wavefunctions for a triangular boundary and present the explicit forms of the symmetric functions representing these wavefunctions. As a third example, we apply the method to the elliptic Felderhof model which is a face-type version and an elliptic extension of the trigonometric Felderhof model. We show that the wavefunctions can be expressed as one-parameter deformations of an elliptic analog of the Vandermonde determinant and elliptic symmetric functions.
Statistical Model Selection for TID Hardness Assurance
NASA Technical Reports Server (NTRS)
Ladbury, R.; Gorelick, J. L.; McClure, S.
2010-01-01
Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments on data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data - that is, data for similar parts fabricated in the same process as the part under qualification. This is despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot and part-type-to-part-type variation. The methods we develop apply not just to qualification decisions, but also to quality control and detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic that provides a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Konstantinos N.; Azuma, Takehiro; Ito, Yuta; Nishimura, Jun; Papadoudis, Stratos Kovalkov
2018-02-01
In recent years the complex Langevin method (CLM) has proven a powerful method in studying statistical systems which suffer from the sign problem. Here we show that it can also be applied to an important problem concerning why we live in four-dimensional spacetime. Our target system is the type IIB matrix model, which is conjectured to be a nonperturbative definition of type IIB superstring theory in ten dimensions. The fermion determinant of the model becomes complex upon Euclideanization, which causes a severe sign problem in its Monte Carlo studies. It is speculated that the phase of the fermion determinant actually induces the spontaneous breaking of the SO(10) rotational symmetry, which has direct consequences on the aforementioned question. In this paper, we apply the CLM to the 6D version of the type IIB matrix model and show clear evidence that the SO(6) symmetry is broken down to SO(3). Our results are consistent with those obtained previously by the Gaussian expansion method.
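The complex Langevin method itself can be illustrated on a toy model far simpler than the type IIB matrix model: a single Gaussian variable with a complex coupling, for which the exact expectation value is known. The sketch below (illustrative only; parameters are arbitrary) evolves the complexified Langevin equation with real noise and compares the sampled average of x^2 with the exact value 1/sigma.
```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: action S(x) = sigma * x^2 / 2 with a complex coupling sigma.
# The exact result is <x^2> = 1 / sigma, which the complex Langevin average should reproduce.
sigma = 1.0 + 1.0j
dt = 1e-3
n_steps = 500_000
n_therm = 50_000

z = 0.0 + 0.0j          # complexified variable x -> z
acc = 0.0 + 0.0j
count = 0
for step in range(n_steps):
    # Complex Langevin update: dz = -dS/dz * dt + real Gaussian noise of variance 2*dt
    z += -sigma * z * dt + np.sqrt(2 * dt) * rng.normal()
    if step >= n_therm:
        acc += z * z
        count += 1

print("CLM estimate of <x^2>:", acc / count)
print("Exact 1/sigma        :", 1 / sigma)
```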
NASA Technical Reports Server (NTRS)
Green, M. J.; Nachtsheim, P. R.
1972-01-01
A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
Pulsed-field gel electrophoresis typing of Staphylococcus aureus isolates
USDA-ARS?s Scientific Manuscript database
Pulsed-field gel electrophoresis (PFGE) is the most applied and effective genetic typing method for epidemiological studies and investigation of foodborne outbreaks caused by different pathogens, including Staphylococcus aureus. The technique relies on analysis of large DNA fragments generated by th...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
NASA Astrophysics Data System (ADS)
Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.
2018-05-01
This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and the type of movement on the accuracy of the vehicle position estimate are investigated. The accuracy of the method was evaluated by separating data from the semi-empirical movement model into different types of movement.
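A minimal constant-velocity Kalman filter applied to noisy two-dimensional position fixes is sketched below to illustrate the kind of smoothing described above; the track, noise levels, and tuning matrices are hypothetical and not the eLoran-specific values of the study.
```python
import numpy as np

rng = np.random.default_rng(5)

dt = 1.0
# Constant-velocity state [x, y, vx, vy]; standard Kalman filter matrices
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)      # process noise (tuning parameter)
R = 25.0 * np.eye(2)      # measurement noise of the position fixes

# Simulated track: straight motion with noisy position fixes (hypothetical data)
true_pos = np.column_stack([np.arange(100) * 3.0, np.arange(100) * 1.5])
meas = true_pos + rng.normal(0, 5.0, true_pos.shape)

x = np.array([meas[0, 0], meas[0, 1], 0.0, 0.0])
P = np.eye(4) * 100.0
filtered = []
for z in meas:
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the new position fix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    filtered.append(x[:2].copy())

filtered = np.array(filtered)
print("RMS error raw fixes   :", np.sqrt(((meas - true_pos) ** 2).mean()))
print("RMS error after filter:", np.sqrt(((filtered - true_pos) ** 2).mean()))
```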
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
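The coupling-estimation idea can be made concrete on a chain small enough to enumerate exactly. The sketch below (illustrative assumptions: an 8-spin open Ising chain, samples drawn from a known coupling) minimizes the Kullback-Leibler divergence by gradient descent, using the standard identity that the gradient with respect to the coupling equals the model average minus the data average of the nearest-neighbour correlation.
```python
import numpy as np
from itertools import product

rng = np.random.default_rng(11)
N = 8                                   # spins; 2^8 = 256 states, small enough to enumerate

states = np.array(list(product([-1, 1], repeat=N)))       # all configurations
nn_corr = (states[:, :-1] * states[:, 1:]).sum(axis=1)    # sufficient statistic C = sum s_i s_{i+1}

def boltzmann(J):
    # Energy E = -J * C, so the Boltzmann weight is exp(J * C)
    w = np.exp(J * nn_corr)
    return w / w.sum()

# "Data": samples drawn from the model at a known true coupling (hypothetical data)
J_true = 0.7
p_true = boltzmann(J_true)
samples = states[rng.choice(len(states), size=20000, p=p_true)]
C_data = (samples[:, :-1] * samples[:, 1:]).sum(axis=1).mean()

# Gradient descent on KL(p_data || p_J): dKL/dJ = <C>_model - <C>_data
J = 0.0
for _ in range(500):
    C_model = (boltzmann(J) * nn_corr).sum()
    J -= 0.1 * (C_model - C_data)

print("Recovered J:", round(J, 3), " true J:", J_true)
```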
Piran, Arezoo; Shahcheraghi, Fereshteh; Solgi, Hamid; Rohani, Mahdi; Badmasti, Farzad
2017-10-01
Multi-drug resistant (MDR) Acinetobacter baumannii, an important nosocomial pathogen, has emerged as a global health concern in recent years. In this study, we applied three simpler, faster, and more cost-effective methods, namely PCR-based open reading frame (ORF) typing, sequence typing of bla OXA-51-like genes, and RAPD-PCR, for rapid typing of A. baumannii strains. Taken together, the results of ORF typing, PCR sequencing of bla OXA-51-like genes, and MLST revealed a high prevalence (62%, 35/57) of ST2, an international and successful clone, among clinical isolates of multi-drug resistant A. baumannii with ORF pattern B and the bla OXA-66 gene. Only 7% (4/57) of MDR isolates belonged to ST1 with ORF pattern A and the bla OXA-69 gene. Interestingly, we detected the singleton ST513 (32%, 18/57), which encoded bla OXA-90 and showed ORF pattern H, as previously isolated in the Middle East. Moreover, our data show that the RAPD-PCR method can detect divergent strains within the STs. Cl-1, Cl-2, Cl-3, Cl-4, Cl-10, Cl-11, Cl-12, Cl-13 and Cl-14 belonged to ST2, while Cl-6, Cl-7, Cl-8 and Cl-9 belonged to ST513. Only Cl-5 belonged to ST1. It seems that the combination of these methods has more discriminatory power than any single method and could be effectively applied to rapid detection of the clonal complex (CC) of A. baumannii strains without performing MLST or PFGE. Copyright © 2017 Elsevier B.V. All rights reserved.
Percolation analysis of nonlinear structures in scale-free two-dimensional simulations
NASA Technical Reports Server (NTRS)
Dominik, Kurt G.; Shandarin, Sergei F.
1992-01-01
Results are presented of applying percolation analysis to several two-dimensional N-body models which simulate the formation of large-scale structure. Three parameters are estimated: the total area (a(c)), total mass (M(c)), and percolation density (rho(c)) of the percolating structure at the percolation threshold, for both unsmoothed and smoothed (with different scales L(s)) nonlinear fields with filamentary structures, confirming early speculations that this type of model has several features of filamentary-type distributions. Also, it is shown that, by properly applying smoothing techniques, many problems previously considered detrimental can be dealt with and overcome. Possible difficulties and prospects with the use of this method are discussed, specifically relating to techniques and methods already applied to CfA deep sky surveys.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Alec F.; Head-Gordon, Martin; McCurdy, C. William
2017-01-30
The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and the method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
Insights: A New Method to Balance Chemical Equations.
ERIC Educational Resources Information Center
Garcia, Arcesio
1987-01-01
Describes a method designed to balance oxidation-reduction chemical equations. Outlines a method which is based on changes in the oxidation number that can be applied to both molecular reactions and ionic reactions. Provides examples and delineates the steps to follow for each type of equation balancing. (TW)
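The abstract does not spell out the article's own steps, so the following is a standard worked example of balancing a redox equation by oxidation-number change, given for illustration only.
```latex
% Illustrative example (not necessarily the article's notation):
% balance Fe^{2+} + MnO4^- + H^+ -> Fe^{3+} + Mn^{2+} + H2O by oxidation-number change.
\begin{align*}
&\text{Fe: } +2 \to +3 \ (\text{loses } 1e^-), \qquad \text{Mn: } +7 \to +2 \ (\text{gains } 5e^-)\\
&\text{Electron balance: } 5 \times 1e^- = 1 \times 5e^- \ \Rightarrow\ 5\ \text{Fe per Mn}\\
&5\,\mathrm{Fe^{2+}} + \mathrm{MnO_4^-} + 8\,\mathrm{H^+} \longrightarrow 5\,\mathrm{Fe^{3+}} + \mathrm{Mn^{2+}} + 4\,\mathrm{H_2O}\\
&\text{Charge check: } 5(+2) + (-1) + 8(+1) = +17 = 5(+3) + (+2)
\end{align*}
```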
Madea, Burkhard; Saukko, Pekka; Musshoff, Frank
2007-01-17
In recent years the research output of forensic medicine has sometimes been regarded as insufficient and of poor quality, especially when parameters such as impact factors and external funding are taken into account. However, forensic medicine has different tasks compared to clinical medicine. The main difference between basic subjects, clinical medicine and forensic medicine is not a lack of scientific efficiency in forensic medicine but a result of the questions asked, the available methods and the specific aims. In contrast to natural-scientific research, forensic science furthermore has important intersections with the arts and socio-scientific disciplines. Etiologic and pathogenetic research is of only limited relevance in forensic medicine. Thus, forensic medicine is excluded from these research fields, which are mainly supported by external funding. In forensic medicine, research mainly means applied research on findings, their probative value and reconstruction, as well as examinations at the various points of intersection between medicine and law. Clinical types of research such as controlled randomised, prospective cross-sectional, cohort or case-control studies can only rarely be applied in forensic medicine because of the area-specific research fields (e.g. thanatology, violent death, vitality, traffic medicine, analytical toxicology, hemogenetics and stain analysis). The types of studies which are successfully established in forensic medicine are method comparisons, sensitivity studies, method validations, kinetic examinations, etc. The tasks of research in forensic medicine and the study types which may be applied are addressed.
Neural net applied to anthropological material: a methodical study on the human nasal skeleton.
Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich
2005-07-01
A new information-processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained on 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach with the more traditional ones. In general the classification by the neural net corresponds with the cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The resulting classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.
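The 15*15-node net described above behaves like a self-organizing map, which can be sketched in a few lines. The example below trains a small Kohonen-style map on synthetic stand-ins for the 17 measurements (the real Aachen data are not reproduced) and reports the best-matching node for each specimen; it illustrates the mapping idea, not the authors' exact network.
```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for 17 anthropological measurements per specimen (hypothetical data)
n_specimens, n_features = 184, 17
X = rng.normal(size=(n_specimens, n_features))
X = (X - X.mean(axis=0)) / X.std(axis=0)            # z-score the features

# 15 x 15 self-organizing map
grid = 15
W = rng.normal(scale=0.1, size=(grid, grid, n_features))
gy, gx = np.mgrid[0:grid, 0:grid]

n_epochs = 30
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs) + 0.01        # decaying learning rate
    radius = grid / 2 * (1 - epoch / n_epochs) + 1  # decaying neighbourhood radius
    for x in X[rng.permutation(n_specimens)]:
        # Best-matching unit (BMU): node whose weight vector is closest to the sample
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)
        # Gaussian neighbourhood update around the BMU
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
        W += lr * h[..., None] * (x - W)

# Map each specimen to its BMU; specimens gathered in nearby nodes suggest a "type"
bmus = [np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), (grid, grid)) for x in X]
print("First five specimens mapped to nodes:", bmus[:5])
```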
Spectral embedding finds meaningful (relevant) structure in image and microarray data
Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L
2006-01-01
Background Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific approach, for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
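A compact sketch in the spirit of the weighted-graph-Laplacian method described above is given below: a Gaussian affinity kernel is row-normalized into a Markov transition matrix, and its leading nontrivial eigenvectors provide the nonlinear embedding. The data are a synthetic curved manifold, and the normalization details of Lafon's method are simplified.
```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic nonlinear data: noisy points along a curved 1-D manifold embedded in 3-D
t = np.sort(rng.uniform(0, 3 * np.pi, 400))
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * t]) + rng.normal(0, 0.05, (400, 3))

# Gaussian affinity kernel and Markov normalization (diffusion-map style)
eps = 0.5
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / eps)
P = K / K.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

# Eigenvectors of P; the first is trivial (constant), the next ones give the embedding
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
embedding = vecs[:, order[1:3]].real      # 2-D nonlinear embedding

# The leading nontrivial coordinate should recover the ordering along the curve
corr = np.corrcoef(embedding[:, 0], t)[0, 1]
print("Correlation of first diffusion coordinate with manifold parameter:", round(abs(corr), 3))
```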
A New Scheme of Integrability for (bi)Hamiltonian PDE
NASA Astrophysics Data System (ADS)
De Sole, Alberto; Kac, Victor G.; Valeri, Daniele
2016-10-01
We develop a new method for constructing integrable Hamiltonian hierarchies of Lax type equations, which combines the fractional powers technique of Gelfand and Dickey, and the classical Hamiltonian reduction technique of Drinfeld and Sokolov. The method is based on the notion of an Adler type matrix pseudodifferential operator and the notion of a generalized quasideterminant. We also introduce the notion of a dispersionless Adler type series, which is applied to the study of dispersionless Hamiltonian equations. Non-commutative Hamiltonian equations are discussed in this framework as well.
Tong, Pan; Coombes, Kevin R
2012-11-15
Identifying genes altered in cancer plays a crucial role in both understanding the mechanism of carcinogenesis and developing novel therapeutics. It is known that there are various mechanisms of regulation that can lead to gene dysfunction, including copy number change, methylation, abnormal expression, mutation and so on. Nowadays, all these types of alterations can be simultaneously interrogated by different types of assays. Although many methods have been proposed to identify altered genes from a single assay, there is no method that can deal with multiple assays while accounting for different alteration types systematically. In this article, we propose a novel method, integration using item response theory (integIRTy), to identify altered genes by using item response theory, which allows integrated analysis of multiple high-throughput assays. When applied to a single assay, the proposed method is more robust and reliable than conventional methods such as Student's t-test or the Wilcoxon rank-sum test. When used to integrate multiple assays, integIRTy can identify novel altered genes that cannot be found by looking at each assay separately. We applied integIRTy to three public cancer datasets (ovarian carcinoma, breast cancer, glioblastoma) for cross-assay-type integration, all of which show encouraging results. The R package integIRTy is available at the web site http://bioinformatics.mdanderson.org/main/OOMPA:Overview. kcoombes@mdanderson.org. Supplementary data are available at Bioinformatics online.
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
Efficacy of ankaferd blood stopper application on non-variceal upper gastrointestinal bleeding
Gungor, Gokhan; Goktepe, M Hakan; Biyik, Murat; Polat, Ilker; Tuna, Tuncer; Ataseven, Huseyin; Demir, Ali
2012-01-01
AIM: To prospectively assess the hemostatic efficacy of the endoscopic topical use of ankaferd blood stopper (ABS) in active non-variceal upper gastrointestinal system (GIS) bleeding. METHODS: Endoscopy was performed on 220 patients with suspected GIS bleeding. Patients with active non-variceal upper gastrointestinal bleeding (NVUGIB) of a spurting or oozing type were included. First, 8-10 cc of isotonic saline was sprayed onto the bleeding lesions. Then, 8 cc of ABS was applied to lesions in which bleeding continued after the isotonic saline application. Other endoscopic therapeutic methods were applied to lesions in which the bleeding did not stop after ABS. RESULTS: Twenty-seven patients had an active NVUGIB of a spurting or oozing type and 193 patients were excluded from the study since they did not have active non-variceal bleeding. 8 cc of ABS was sprayed onto the lesions of 26 patients whose bleeding continued after isotonic saline, and in 19 of them bleeding stopped after ABS. Other endoscopic treatment methods were applied to the remaining patients, and the bleeding was stopped by these interventions in 6 of 7 patients. CONCLUSION: ABS is an effective method for NVUGIB, particularly in young patients with no coagulopathy. ABS may be considered as part of a combination treatment with other endoscopic methods. PMID:23293725
NASA Astrophysics Data System (ADS)
Pardimin, H.; Arcana, N.
2018-01-01
Many studies in the field of mathematics education apply the quasi-experimental method and use the t-test for statistical analysis. The quasi-experiment has a weakness: it is difficult to fulfil "the law of a single independent variable". The t-test also has a weakness: the generalization of the conclusions obtained is less powerful. This research aimed to find ways to reduce the weaknesses of the quasi-experimental method and to improve the generalization of the research results. The method applied in the research was a non-interactive qualitative method of the concept-analysis type. The concepts analysed were the concepts of statistics, educational research methods, and research reports. The result is a way to overcome the weaknesses of the quasi-experiment and the t-test, namely to apply a combination of Factorial Design and Balanced Design, which the authors refer to as Factorial-Balanced Design. The advantages of this design are: (1) it almost fulfils "the law of a single independent variable", so there is no need to test the similarity of academic ability; (2) the sample sizes of the experimental group and the control group become larger and equal, so the design is robust to violations of the assumptions of the ANOVA test.
Sensory quality of Camembert-type cheese: Relationship between starter cultures and ripening molds.
Galli, Bruno Domingues; Martin, José Guilherme Prado; da Silva, Paula Porrelli Moreira; Porto, Ernani; Spoto, Marta Helena Fillet
2016-10-03
Starter cultures and ripening molds used in the manufacture of moldy cheese aimed at obtaining characteristic flavors and textures considerably differ among dairy industries. Thus, the study of variables inherent to the process and their influence on sensory patterns in cheese can improve the standardization and control of the production process. The aim of this work was to study the influence of three different variables on the sensory quality of Camembert-type cheese: type of lactic bacteria, type of ripener molds and inoculation method. Batches of Camembert-type cheese were produced using O or DL-type mesophilic starter culture, ripened with Penicillium camemberti or Penicillium candidum and mold inoculation was made directly into the milk or by spraying. All batches were sensorially evaluated using Quantitative Descriptive Analysis (QDA) with panelists trained for various attributes. Among the combinations analyzed, those resulting in more typical Camembert-type cheese were those using O-type mesophilic starter culture and P. candidum maturation mold directly applied into the milk or sprayed and those using DL-type mesophilic starter and P. camemberti ripener mold applied by surface spraying. These results demonstrate, therefore, that the combination of different ripener molds, inoculation methods and starter cultures directly influences the sensory quality of Camembert-type cheese, modifying significantly its texture, appearance, aroma and taste. Copyright © 2016 Elsevier B.V. All rights reserved.
Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.
Ahmadieh, Hajar; Asl, Babak Mohammadzadeh
2017-04-01
We proposed a noninvasive method for separating the fetal ECG (FECG) from maternal ECG (MECG) by using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from abdominal signal by using one abdominal channel, including maternal and fetal cardiac signals and other environmental noise signals, and one chest channel. The proposed algorithm detects the nonlinear dynamics of the mother's body. So, the components of the MECG are estimated from the abdominal signal. By subtracting estimated mother cardiac signal from abdominal signal, fetal cardiac signal can be extracted. This algorithm was applied on synthetic ECG signals generated based on the models developed by McSharry et al. and Behar et al. and also on DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, in evaluation of the algorithm with the synthetic data based on McSharry model, for input signals with SNR of -5dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. Also, the results show that increasing the uncertainty or decreasing the input SNR leads to increasing the percentage of the improvement in SNR of the extracted FECG. For instance, when the SNR of the input signal decreases to -30dB, our proposed algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on Behar model. Our results on real database reflect the success of the proposed method to separate the maternal and fetal heart signals even if their waves overlap in time. Moreover, the proposed algorithm was applied to the simulated fetal ECG with ectopic beats and achieved good results in separating FECG from MECG. The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and the polynomial networks methods, which is due to its capability to capture the nonlinearities of the model better. Copyright © 2017 Elsevier B.V. All rights reserved.
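The subtract-the-estimated-maternal-component structure described above can be illustrated with a much simpler classical stand-in: an LMS adaptive filter driven by the chest reference. The Type-2 neuro-fuzzy estimator of the paper is replaced here for brevity, and all signals are synthetic.

```python
# Adaptive-cancellation sketch: estimate the maternal ECG component in the
# abdominal signal from a chest reference and subtract it, leaving a fetal
# residual.  An LMS filter is used as a simple, classical stand-in for the
# Type-2 neuro-fuzzy estimator of the paper; signals are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n) / 500.0
chest = np.sin(2 * np.pi * 1.2 * t)                        # maternal reference (chest lead)
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t + 0.7)            # fetal component (unknown)
abdominal = 0.8 * chest + fetal + 0.05 * rng.normal(size=n)

order, mu = 8, 0.01
w = np.zeros(order)
fecg = np.zeros(n)
for i in range(order, n):
    x = chest[i - order:i][::-1]          # recent reference samples
    y_hat = w @ x                         # predicted maternal contribution
    e = abdominal[i] - y_hat              # residual ~ fetal ECG + noise
    w += 2 * mu * e * x                   # LMS weight update
    fecg[i] = e

print(np.corrcoef(fecg[order:], fetal[order:])[0, 1])   # residual tracks the fetal component
```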
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
Application of semiconductor diffusants to solar cells by screen printing
NASA Technical Reports Server (NTRS)
Evans, J. C., Jr.; Brandhorst, H. W., Jr.; Mazaris, G. A.; Scudder, L. R. (Inventor)
1978-01-01
Diffusants were applied onto semiconductor solar cell substrates, using screen printing techniques. The method was applicable to square and rectangular cells and can be used to apply dopants of opposite types to the front and back of the substrate. Then, simultaneous diffusion of both dopants can be performed with a single furnace pass.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is currently still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
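A minimal sketch of the recommended Monte Carlo step on a load-resistance model is shown below; the lognormal load, normal resistance and the cost figures are illustrative assumptions, not values from the paper.

```python
# Monte Carlo failure-probability sketch for a load-resistance model.
# The lognormal load and normal resistance distributions are illustrative
# assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
load = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=n)   # applied stress [MPa]
resistance = rng.normal(loc=180.0, scale=20.0, size=n)         # component strength [MPa]

failures = load > resistance
p_f = failures.mean()
se = np.sqrt(p_f * (1 - p_f) / n)          # Monte Carlo standard error
print(f"P(failure) ~ {p_f:.2e} +/- {se:.1e}")

# A life-cycle cost comparison could then weight p_f by a failure cost
expected_cost = p_f * 50_000 + 200        # illustrative cost figures
print(f"expected cost per unit ~ {expected_cost:.1f}")
```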
Xie, Tao; Zhang, Dingguo; Wu, Zehan; Chen, Liang; Zhu, Xiangyang
2015-01-01
In this work, some case studies were conducted to classify several kinds of hand motions from electrocorticography (ECoG) signals during intraoperative awake craniotomy and extraoperative seizure monitoring processes. Four subjects (P1, P2 with intractable epilepsy during seizure monitoring and P3, P4 with brain tumor during awake craniotomy) participated in the experiments. Subjects performed three types of hand motions (Grasp, Thumb-finger motion and Index-finger motion) contralateral to the motor cortex covered with ECoG electrodes. Two methods were used for signal processing. Method I: an autoregressive (AR) model with the Burg method was applied to extract features, an additional waveform length (WL) feature was considered, and finally linear discriminant analysis (LDA) was used as the classifier. Method II: stationary subspace analysis (SSA) was applied for data preprocessing, and the common spatial pattern (CSP) was used for feature extraction before the LDA decoding process. Applying Method I, the three-class accuracies of P1-P4 were 90.17, 96.00, 91.77, and 92.95%, respectively. For Method II, the three-class accuracies of P1-P4 were 72.00, 93.17, 95.22, and 90.36%, respectively. This study verified the possibility of decoding multiple hand motion types during an awake craniotomy, which is the first step toward dexterous neuroprosthetic control during surgical implantation, in order to verify the optimal placement of electrodes. The accuracy during awake craniotomy was comparable to results during seizure monitoring. This study also indicated that ECoG is a promising approach for precise identification of eloquent cortex during awake craniotomy, and might form a promising BCI system that could benefit both patients and neurosurgeons. PMID:26483627
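Method I's feature-and-classifier pipeline can be sketched as follows. A least-squares AR fit stands in for the Burg estimator, the ECoG epochs are synthetic, and scikit-learn's LDA is used as the classifier; none of this is the authors' code.

```python
# Sketch of the Method-I style pipeline: per-epoch AR coefficients plus a
# waveform-length feature, classified with LDA.  A least-squares AR fit is
# used as a stand-in for the Burg estimator, and the ECoG epochs are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

def ar_coeffs(x, order=6):
    # Fit x[t] = sum_k a_k x[t-k] by ordinary least squares
    A = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def features(epoch):
    wl = np.sum(np.abs(np.diff(epoch)))          # waveform length
    return np.concatenate([ar_coeffs(epoch), [wl]])

# Synthetic 3-class data: 60 epochs of 512 samples per class
X, y = [], []
for label in range(3):
    for _ in range(60):
        epoch = np.sin(np.linspace(0, 20 + 5 * label, 512)) + 0.5 * rng.normal(size=512)
        X.append(features(epoch))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])      # train on half the epochs
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```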
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner.
An, Jhonghyun; Choi, Baehoon; Sim, Kwee-Bo; Kim, Euntai
2016-07-20
There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.
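The SLOGM building step rests on a standard binary Bayes (log-odds) occupancy update; the sketch below shows that update with an added decay toward the prior to mimic the "dynamic" behaviour. Grid size, the inverse sensor model and the example scan are illustrative assumptions.

```python
# Log-odds binary Bayes update sketch for an occupancy grid, the core of the
# SLOGM building step.  Grid size, sensor model and the example scan are
# illustrative assumptions.
import numpy as np

grid = np.zeros((100, 100))            # log-odds occupancy, 0 = unknown (p = 0.5)
l_occ, l_free = np.log(0.7 / 0.3), np.log(0.3 / 0.7)   # inverse sensor model
decay = 0.9                            # "dynamic" forgetting toward the prior

def update(grid, hits, misses):
    grid *= decay                      # fade old evidence so moving objects decay
    for i, j in hits:
        grid[i, j] += l_occ            # cells where a laser return was observed
    for i, j in misses:
        grid[i, j] += l_free           # cells traversed by the beam without a return
    return grid

hits = [(50, 60), (50, 61), (51, 60)]
misses = [(50, k) for k in range(40, 60)]
grid = update(grid, hits, misses)

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))    # convert log-odds back to probability
print(prob[50, 60], prob[50, 45])
```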
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Noterdaeme, J. M.; Max-Planck-Institut für Plasmaphysik, Garching D-85748
2014-11-15
Information visualization aimed at facilitating human perception is an important tool for the interpretation of experiments on the basis of complex multidimensional data characterizing the operational space of fusion devices. This work describes a method for visualizing the operational space on a two-dimensional map and applies it to the discrimination of type I and type III edge-localized modes (ELMs) from a series of carbon-wall ELMy discharges at JET. The approach accounts for stochastic uncertainties that play an important role in fusion data sets, by modeling measurements with probability distributions in a metric space. The method is aimed at contributing to physical understanding of ELMs as well as their control. Furthermore, it is a general method that can be applied to the modeling of various other plasma phenomena as well.
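One simple way to realize such a map, sketched below under stated assumptions, is to model each measurement as a univariate Gaussian, compute pairwise Hellinger distances between the distributions, and project the resulting distance matrix to two dimensions with metric MDS. This is a stand-in illustration rather than the authors' specific probabilistic construction, and the data are synthetic.

```python
# Sketch of mapping uncertain measurements to a 2-D map: each observation is
# modelled as a 1-D Gaussian (mean, sigma), pairwise Hellinger distances are
# computed, and metric MDS projects the points onto the plane.  Simplified
# stand-in for the paper's method; the data are synthetic.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(5)
# Two "ELM type" clusters of (mean, sigma) pairs
mu = np.concatenate([rng.normal(0.0, 0.3, 40), rng.normal(2.0, 0.3, 40)])
sigma = np.abs(rng.normal(0.5, 0.1, 80)) + 1e-3

def hellinger(m1, s1, m2, s2):
    # Closed-form Hellinger distance between two univariate Gaussians
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2)))
    return np.sqrt(1.0 - bc)

n = len(mu)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hellinger(mu[i], sigma[i], mu[j], sigma[j])

coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(coords.shape)   # (80, 2) operational-space map coordinates
```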
The anisotropic Hooke's law for cancellous bone and wood.
Yang, G; Kabel, J; van Rietbergen, B; Odgaard, A; Huiskes, R; Cowin, S C
A method of data analysis for a set of elastic constant measurements is applied to data bases for wood and cancellous bone. For these materials the identification of the type of elastic symmetry is complicated by the variable composition of the material. The data analysis method permits the identification of the type of elastic symmetry to be accomplished independent of the examination of the variable composition. This method of analysis may be applied to any set of elastic constant measurements, but is illustrated here by application to hardwoods and softwoods, and to an extraordinary data base of cancellous bone elastic constants. The solid volume fraction or bulk density is the compositional variable for the elastic constants of these natural materials. The final results are the solid volume fraction dependent orthotropic Hooke's law for cancellous bone and a bulk density dependent one for hardwoods and softwoods.
Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper
NASA Astrophysics Data System (ADS)
Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.
2017-04-01
This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method involves comparing each image against a reference spectrum, where the reference spectrum is obtained from the spectral signature of the type of coverage you want to detect. In this case the method has been tested using multispectral images (SPOT5) of the community of Madrid (Spain), and multispectral images (Quickbird) of an area over Indonesia that was impacted by the December 26, 2004 tsunami; here, the tests have focused on the detection of changes in vegetation. The image comparison is obtained by applying Spectral Angle Mapper between the reference spectrum and each multitemporal image. Then, a threshold to produce a single image of change is applied, which corresponds to the vegetation zones. The results for each multitemporal image are combined through an exclusive or (XOR) operation that selects vegetation zones that have changed over time. Finally, the derived results were compared against a supervised method based on classification with the Support Vector Machine. Furthermore, the NDVI-differencing and the Spectral Angle Mapper techniques were selected as unsupervised methods for comparison purposes. The main novelty of the method consists in the detection of changes in a specific land cover type (vegetation), therefore, for comparison purposes, the best scenario is to compare it with methods that aim to detect changes in a specific land cover type (vegetation). This is the main reason to select NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvements using a reference spectrum vector, the results are compared with the basic-SAM method. In SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; in Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. Finally, the precision results of the method are comparable to those of a supervised method, supported by low detection of false positives and false negatives, along with a high overall accuracy and a high kappa index. On the other hand, the execution times were comparable to those of unsupervised methods of low computational load.
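The core of the proposed pipeline (spectral angle of every pixel against a reference spectrum for each date, thresholding, and XOR of the two masks) can be sketched as follows; the reference spectrum, threshold and images are placeholders, not values from the paper.

```python
# Spectral Angle Mapper (SAM) change-detection sketch: compare each pixel of
# two multitemporal images against a reference vegetation spectrum, threshold
# the angle maps, and XOR them to flag vegetation change.  The reference
# spectrum, threshold and images are illustrative.
import numpy as np

rng = np.random.default_rng(6)
bands = 4
img_t1 = rng.random((100, 100, bands))          # placeholder multispectral images
img_t2 = rng.random((100, 100, bands))
ref = np.array([0.05, 0.08, 0.06, 0.45])        # assumed vegetation reference spectrum

def spectral_angle(img, ref):
    dot = np.tensordot(img, ref, axes=([2], [0]))
    norms = np.linalg.norm(img, axis=2) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))   # radians per pixel

threshold = 0.35                                 # radians; vegetation if the angle is small
veg_t1 = spectral_angle(img_t1, ref) < threshold
veg_t2 = spectral_angle(img_t2, ref) < threshold

change = np.logical_xor(veg_t1, veg_t2)          # vegetation appeared or disappeared
print("changed pixels:", int(change.sum()))
```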
Streaming Swarm of Nano Space Probes for Modern Analytical Methods Applied to Planetary Science
NASA Astrophysics Data System (ADS)
Vizi, P. G.; Horvath, A. F.; Berczi, Sz.
2017-11-01
Streaming swarms make it possible to collect data over large areas at one time. The whole streaming fleet can behave like one large organization and can be realized as a planetary mission solution with stream-type analytical methods.
Spectral methods for partial differential equations
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Streett, C. L.; Zang, T. A.
1983-01-01
Origins of spectral methods, especially their relation to the Method of Weighted Residuals, are surveyed. Basic Fourier, Chebyshev, and Legendre spectral concepts are reviewed, and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic, and mixed type. Fluid dynamical applications are emphasized.
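The basic Fourier spectral concept reviewed above, differentiation by multiplication in wavenumber space, can be demonstrated in a few lines. The example below is a standard illustration on a smooth periodic function, not code from the survey.

```python
# Basic Fourier spectral differentiation sketch: differentiate a smooth
# periodic function on [0, 2*pi) by multiplying its FFT by i*k.
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.exp(np.sin(x))                      # smooth periodic test function
du_exact = np.cos(x) * u

k = np.fft.fftfreq(n, d=1.0 / n) * 1j      # integer wavenumbers times i
du_spec = np.real(np.fft.ifft(k * np.fft.fft(u)))

print("max error:", np.max(np.abs(du_spec - du_exact)))   # near machine precision
```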
Recent applications of spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
Origins of spectral methods, especially their relation to the method of weighted residuals, are surveyed. Basic Fourier and Chebyshev spectral concepts are reviewed and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic and mixed type. Fluid dynamical applications are emphasized.
Recurrence of attic cholesteatoma: different methods of estimating recurrence rates.
Stangerup, S E; Drozdziewicz, D; Tos, M; Hougaard-Jensen, A
2000-09-01
One problem in cholesteatoma surgery is recurrence of cholesteatoma, which is reported to vary from 5% to 71%. This great variability can be explained by issues such as the type of cholesteatoma, surgical technique, follow-up rate, length of the postoperative observation period, and statistical method applied. The aim of this study was to illustrate the impact of applying different statistical methods to the same material. Thirty-three children underwent single-stage surgery for attic cholesteatoma during a 15-year period. Thirty patients (94%) attended a re-evaluation. During the observation period of 15 years, recurrence of cholesteatoma occurred in 10 ears. The cumulative total recurrence rate varied from 30% to 67%, depending on the statistical method applied. In conclusion, the choice of statistical method should depend on the number of patients, follow-up rates, length of the postoperative observation period and presence of censored data.
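The impact of the statistical method on the reported rate is easy to reproduce on a toy data set: the sketch below contrasts a crude proportion with a Kaplan-Meier style cumulative estimate on the same (invented) follow-up times.

```python
# Sketch contrasting two recurrence-rate estimates on the same follow-up data:
# a crude proportion versus a Kaplan-Meier style cumulative estimate that
# accounts for censoring.  The follow-up times and events are invented.
import numpy as np

# (years of follow-up, 1 = recurrence observed, 0 = censored)
time = np.array([1.0, 2.5, 3.0, 4.0, 5.5, 6.0, 7.0, 8.0, 10.0, 12.0, 13.0, 15.0])
event = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])

crude = event.sum() / len(event)                 # ignores length of observation
print(f"crude recurrence rate: {crude:.0%}")

# Kaplan-Meier estimate of remaining recurrence-free
order = np.argsort(time)
t, e = time[order], event[order]
at_risk = len(t)
surv = 1.0
for ti, ei in zip(t, e):
    if ei:
        surv *= (at_risk - 1) / at_risk          # step down at each recurrence
    at_risk -= 1                                  # censored or not, subject leaves the risk set
print(f"15-year cumulative recurrence (KM): {1 - surv:.0%}")
```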
Method of making diode structures
Compaan, Alvin D.; Gupta, Akhlesh
2006-11-28
A method of making a diode structure includes the step of depositing a transparent electrode layer of any one or more of the group ZnO, ZnS and CdO onto a substrate layer, and depositing an active semiconductor junction having an n-type layer and a p-type layer onto the transparent electrode layer under process conditions that avoid substantial degradation of the electrode layer. A back electrode coating layer is applied to form a diode structure.
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product, composed mainly of plant resins and beeswax, and therefore its chemical composition varies with the geographic and plant origins of these resins, as well as the species of bee. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
Application of type synthesis theory to the redesign of a complex surgical instrument.
Lim, Jonas J B; Erdman, Arthur G
2002-06-01
Surgical instruments consist of basic mechanical components such as gears, links, pivots, sliders, etc., which are common in mechanical design. This paper describes the application of a method in the analysis and design of complex surgical instruments such as those employed in laparoscopic surgery. This is believed to be the first application of type synthesis theory to a complex medical instrument. Type synthesis is a methodology that can be applied during the conceptual phase of mechanical design. A handle assembly from a patented laparoscopic surgical stapler is used to illustrate the application of the design method developed. Type synthesis is applied to specific subsystems of the mechanism within the handle assembly, where alternative design concepts are generated. Chosen concepts are then combined to form a new conceptual design for the handle assembly. The new handle assembly is improved because it has fewer parts, a simpler design, and is easier to assemble. Surgical instrument designers may use the methodology presented here to analyze the mechanical subsystems within complex instruments and to create new options that may offer improvements to the original design.
NASA Technical Reports Server (NTRS)
Bever, R. S.
1984-01-01
Nondestructive high voltage test techniques (mostly electrical methods) are studied to prevent total or catastrophic breakdown of insulation systems under applied high voltage in space. Emphasis is on the phenomenon of partial breakdown or partial discharge (P.D.) as a symptom of insulation quality, notably partial discharge testing under D.C. applied voltage. Many of the electronic parts and high voltage instruments in space experience D.C. applied stress in service, and application of A.C. voltage to any portion thereof would be prohibited. Suggestions include: investigation of the ramp test method for D.C. partial discharge measurements; testing of actual flight-type insulation specimens; perfecting potting resin samples with controlled defects for test; evaluating several types of potting resins and recommending the better ones based on their electrical characteristics (thermal and elastic properties are also considered); testing of commercial capacitors; and approximate acceptance/rejection/rerating criteria for sample test elements for space use, based on D.C. partial discharge.
Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game
NASA Astrophysics Data System (ADS)
Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam
2018-03-01
In developing a game there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process of matching a player's playing style with the behaviour, attributes, and game environment. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, developed using Unity. A game of this type features a fighter aircraft character trying to defend itself from insistent enemy attacks. This classic game genre was chosen to implement adaptive algorithms because it has sufficiently complex attributes to be developed using dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.
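A minimal dynamic-balancing loop of the kind described might look like the sketch below; the performance measure, thresholds and step sizes are illustrative assumptions rather than the adaptive algorithm actually used in Safari Indonesia.

```python
# Minimal dynamic-game-balancing loop sketch: after each wave, enemy attributes
# are nudged up or down based on the player's hit ratio and remaining health.
# The thresholds and step sizes are illustrative, not taken from the paper.
import random

spawn_rate, enemy_speed = 1.0, 1.0       # difficulty attributes

def play_wave(spawn_rate, enemy_speed):
    # Placeholder for a real gameplay wave: returns (hit_ratio, health_left)
    skill = 0.7                                              # assumed player skill
    hit_ratio = max(0.0, min(1.0, random.gauss(skill / enemy_speed, 0.1)))
    health_left = max(0.0, 1.0 - 0.3 * spawn_rate * (1.0 - hit_ratio))
    return hit_ratio, health_left

for wave in range(10):
    hit_ratio, health_left = play_wave(spawn_rate, enemy_speed)
    performance = 0.5 * hit_ratio + 0.5 * health_left
    if performance > 0.7:                 # player is cruising: raise difficulty
        spawn_rate *= 1.1
        enemy_speed *= 1.05
    elif performance < 0.4:               # player is frustrated: ease off
        spawn_rate *= 0.9
        enemy_speed *= 0.95
    print(f"wave {wave}: perf={performance:.2f} spawn={spawn_rate:.2f} speed={enemy_speed:.2f}")
```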
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, up to a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data are dependent on time.
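A basic Lowess call of the kind used here is shown below with statsmodels; it is a 1-D stand-in for the spatial fits in the paper, with `frac` as the smoothing span.

```python
# Minimal Lowess smoothing sketch with statsmodels; a 1-D stand-in for the
# spatial local-regression fits described in the paper.  `frac` is the span
# (fraction of points used in each local fit).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 300))
y = np.sin(x) + 0.3 * rng.normal(size=300)      # noisy signal, e.g. magnitude vs. position

smoothed = lowess(y, x, frac=0.2)               # returns sorted (x, fitted y) pairs
rel_err = np.abs(smoothed[:, 1] - np.sin(smoothed[:, 0])).mean()
print("mean absolute error of the local fit:", rel_err)
```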
Structural damage detection-oriented multi-type sensor placement with multi-objective optimization
NASA Astrophysics Data System (ADS)
Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong
2018-05-01
A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensors in the optimization can reduce the searching space in the optimization to make the proposed method more effective. Moreover, how to select a most optimal sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials by using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced from the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
[Metabolic surgery in treatment of diabetes mellitus of type II].
Sedov, V M; Fishman, M B
2013-01-01
According to WHO data, diabetes mellitus has now been diagnosed in more than 280 million people, and type II diabetes mellitus accounts for 90% of these patients. The applied methods of conservative therapy seldom lead to euglycemia. In recent years the treatment of diabetes mellitus has been carried out by means of different bariatric interventions. Good results were obtained, and they should be analyzed and investigated. The results of treatment of 142 of 628 patients (with type II diabetes) were assessed. The patients underwent different bariatric interventions; modern laparoscopic operations were performed on all the patients. Controlled (adjustable) gastric banding was performed in 81 patients, gastric resection in 28, gastric bypass surgery in 22, and biliopancreatic diversion in 11. Improved control of the glycemia level was obtained. Type II diabetes can be treated by surgical methods. The best results were obtained after combined operations, which could potentially present an alternative method of treatment of type II diabetes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.
The report presents the basic results of calculational, theoretical and experimental efforts in the study of Rayleigh-Taylor, Kelvin-Helmholtz and Richtmyer-Meshkov instabilities and the turbulent mixing which is caused by their evolution. VNIIEF has been conducting these investigations since the late forties. This report is based on data which were published at different times in Russian and foreign journals. The first part of the report deals with the calculational and theoretical techniques currently applied for the description of hydrodynamic instabilities, as well as with the results of several individual problems and their comparison with experiment. These methods can be divided into two types: direct numerical simulation methods and phenomenological methods. The first type includes the regular 2D and 3D gasdynamical techniques as well as techniques based on the small perturbation approximation and on the incompressible liquid approximation. The second type comprises techniques based on various phenomenological turbulence models. The second part of the report describes the experimental methods and cites the experimental results of Rayleigh-Taylor and Richtmyer-Meshkov instability studies as well as of turbulent mixing. The applied methods were based on thin-film gaseous models, jelly models and liquid layer models. The research was done for plane and cylindrical geometries. As drivers, shock tubes of different designs were used, as well as gaseous explosive mixtures, compressed air and electric wire explosions. The experimental results were applied in the calibration of the calculational-theoretical techniques. The authors did not aim at covering all VNIIEF research done in this field of science. To a great extent the choice of the material depended on the personal contribution of the authors to these studies.
Fine detrending of raw Kepler and MOST photometric data of KIC 6950556 and HD 37633
NASA Astrophysics Data System (ADS)
Mikulášek, Zdeněk; Paunzen, Ernst; Zejda, Miloslav; Semenko, Evgenij; Bernhard, Klaus; Hümmerich, Stefan; Zhang, Jia; Hubrig, Swetlana; Kuschnig, Rainer; Janík, Jan; Jagelka, Miroslav
2016-07-01
We present a simple phenomenological method for detrending raw Kepler and MOST photometry, which is illustrated by means of photometric data processing of two periodically variable chemically peculiar stars, KIC 6950556 and HD 37633. In principle, this method may be applied to any type of periodically variable object and to satellite or ground-based photometry. As a by-product, we have identified KIC 6950556 as a magnetic chemically peculiar star with an ACV-type variability.
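The phenomenological detrending idea can be sketched as fitting a low-order trend to the raw light curve and dividing it out, keeping the periodic rotational modulation; the sketch below uses a synthetic light curve and a cubic polynomial, which is only one possible choice of trend model, not the authors' exact prescription.

```python
# Simple phenomenological detrending sketch: fit a low-order polynomial to a
# raw light curve and divide it out, leaving the periodic rotational
# modulation.  The light curve here is synthetic.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 30, 3000)                          # days
period = 2.5
signal = 1.0 + 0.01 * np.sin(2 * np.pi * t / period)  # rotational modulation
trend = 1.0 + 0.002 * t - 5e-5 * t**2                 # slow instrumental drift
flux = signal * trend + 0.002 * rng.normal(size=t.size)

coeffs = np.polyfit(t, flux, deg=3)                   # low-order trend model
detrended = flux / np.polyval(coeffs, t)

# The detrended series can now be phase-folded on the rotation period
phase = (t % period) / period
print(detrended.std(), flux.std())
```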
CTE method and interaction solutions for the Kadomtsev-Petviashvili equation
NASA Astrophysics Data System (ADS)
Ren, Bo
2017-02-01
The consistent tanh expansion method is applied to the Kadomtsev-Petviashvili equation. The interaction solutions among one soliton and other types of solitary waves, such as multiple resonant soliton solutions and cnoidal waves, are explicitly given. Some special concrete interaction solutions are discussed both in analytical and graphical ways.
Asphalt in Pavement Maintenance.
ERIC Educational Resources Information Center
Asphalt Inst., College Park, MD.
Maintenance methods that can be used equally well in all regions of the country have been developed for the use of asphalt in pavement maintenance. Specific information covering methods, equipment and terminology that applies to the use of asphalt in the maintenance of all types of pavement structures, including shoulders, is provided. In many…
NASA Astrophysics Data System (ADS)
Kulkarni, Sandip; Ramaswamy, Bharath; Horton, Emily; Gangapuram, Sruthi; Nacev, Alek; Depireux, Didier; Shimoji, Mika; Shapiro, Benjamin
2015-11-01
This article presents a method to investigate how magnetic particle characteristics affect their motion inside tissues under the influence of an applied magnetic field. Particles are placed on top of freshly excised tissue samples, a calibrated magnetic field is applied by a magnet underneath each tissue sample, and we image and quantify particle penetration depth by quantitative metrics to assess how particle sizes, their surface coatings, and tissue resistance affect particle motion. Using this method, we tested available fluorescent particles from Chemicell of four sizes (100 nm, 300 nm, 500 nm, and 1 μm diameter) with four different coatings (starch, chitosan, lipid, and PEG/P) and quantified their motion through freshly excised rat liver, kidney, and brain tissues. In broad terms, we found that the applied magnetic field moved chitosan particles most effectively through all three tissue types (as compared to starch, lipid, and PEG/P coated particles). However, the relationship between particle properties and their resulting motion was found to be complex. Hence, it will likely require substantial further study to elucidate the nuances of transport mechanisms and to select and engineer optimal particle properties to enable the most effective transport through various tissue types under applied magnetic fields.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Melville, Angela M.; Ehrhardt, Christopher J.
2012-05-17
The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of the castor plant Ricinus communis. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatographic-mass spectrometric (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method and independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation.
Development of a Thiolysis HPLC Method for the Analysis of Procyanidins in Cranberry Products.
Gao, Chi; Cunningham, David G; Liu, Haiyan; Khoo, Christina; Gu, Liwei
2018-03-07
The objective of this study was to develop a thiolysis HPLC method to quantify total procyanidins, the ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Cysteamine was utilized as a low-odor substitute for toluene-α-thiol in the thiolysis depolymerization. A reaction temperature of 70 °C and a reaction time of 20 min, in 0.3 M HCl, were determined to be the optimum depolymerization conditions. Thiolytic products of cranberry procyanidins were separated by RP-HPLC and identified using high-resolution mass spectrometry. Standard curves of good linearity were obtained with thiolyzed procyanidin dimer A2 and B2 external standards. The detection and quantification limits, recovery, and precision of this method were validated. The new method was applied to quantify total procyanidins, average degree of polymerization, ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Results showed that the method was suitable for quantitative and qualitative analysis of procyanidins in cranberry products.
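The external-standard quantitation step relies on an ordinary linear calibration curve; a minimal sketch is given below with invented peak areas and concentrations, not data from the paper.

```python
# External-standard calibration sketch for the HPLC quantitation step: fit a
# linear peak-area vs. concentration curve for a standard (e.g. procyanidin A2)
# and back-calculate an unknown.  All numbers are invented for illustration.
import numpy as np

conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0])        # standard concentrations (ug/mL)
area = np.array([10.2, 25.9, 50.7, 101.5, 252.0, 505.3])  # measured peak areas

slope, intercept = np.polyfit(conc, area, 1)
fit = slope * conc + intercept
r2 = 1 - np.sum((area - fit) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"calibration: area = {slope:.2f}*conc + {intercept:.2f}, R^2 = {r2:.4f}")

sample_area = 180.0
sample_conc = (sample_area - intercept) / slope
print(f"back-calculated sample concentration: {sample_conc:.1f} ug/mL")
```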
Alternative methods to model frictional contact surfaces using NASTRAN
NASA Technical Reports Server (NTRS)
Hoang, Joseph
1992-01-01
Elongated (slotted) holes have been used extensively for the integration of equipment into Spacelab racks. In the past, this type of interface has been modeled assuming that there is no slippage between contact surfaces, or that there is no load transfer in the direction of the slot. Since the contact surfaces are bolted together, the contact friction provides a load path determined by the normal applied force (bolt preload) and the coefficient of friction. Three alternate methods that utilize spring elements, externally applied couples, and stress-dependent elements are examined to model the contact surfaces. Results of these methods are compared with results obtained from methods that use GAP elements and rigid elements.
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod's 1D shock tube problem is solved for a large range of Knudsen numbers. Results from Boltzmann, Navier-Stokes and hybrid solvers are compared. The CPU time for the hybrid solver is 3-4 times faster than for the Boltzmann solver.
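The automatic domain decomposition hinges on a continuum breakdown criterion; a common choice (assumed here, the paper may use a different criterion) is the gradient-length local Knudsen number, sketched below.

```python
# Continuum-breakdown flagging sketch: compute a gradient-length local Knudsen
# number Kn_GL = lambda * |d rho/dx| / rho per cell and assign cells above a
# cutoff to the kinetic (Boltzmann/DSMC-type) solver, the rest to the
# continuum (Navier-Stokes-type) solver.  The profile, mean free path and the
# 0.05 cutoff are illustrative assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
rho = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.02)      # smoothed shock-like density profile
mean_free_path = 5e-3

drho_dx = np.gradient(rho, x)
kn_gl = mean_free_path * np.abs(drho_dx) / rho    # gradient-length Knudsen number

cutoff = 0.05
kinetic_cells = kn_gl > cutoff
print("cells handled by the kinetic solver:", int(kinetic_cells.sum()), "of", x.size)
```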
Jeng, Ming-Dih; Liu, Po-Yi; Kuo, Jia-Hum; Lin, Chun-Li
2017-04-01
This study evaluates the load fatigue performance of two abutment-implant connection types, the retaining-screw (RS) and the taper integrated screwed-in (TIS) type, under 3 applied torque levels based on the screw elastic limit. Three torque levels (the recommended torque of 25 Ncm, and torques 10% less and 10% more than the ratio of recommended torque to screw elastic limit of the different implants) were applied to the implants to perform static and dynamic testing according to the ISO 14801 method. Removal torque loss was calculated for each group after the endurance limit was reached (passing 5 × 10^6 cycles) in the fatigue test. The static fracture resistance results showed that the fracture resistance of the TIS-type implant significantly increased (P < .05) when the abutment screw was inserted more tightly. The dynamic testing results showed that the endurance limits for the RS-type implant were 229 N, 197 N, and 224 N and those for the TIS-type implant were 322 N, 364 N, and 376 N as the screw insertion torque was applied from low to high. The corresponding significant (P < .05) removal torque losses for the TIS-type implant were 13.2%, 5.3%, and 2.6%, but no significant difference was found for the RS-type implant. This study concluded that the static fracture resistance and dynamic endurance limit of the TIS-type implant (1-piece solid abutment) increased when torque was applied more tightly to the screw. Less torque loss was also found with increasing screw insertion torque.
Arias, Jean Lucas de Oliveira; Schneider, Antunielle; Batista-Andrade, Jahir Antonio; Vieira, Augusto Alves; Caldas, Sergiane Souza; Primel, Ednei Gilberto
2018-02-01
Clean extracts are essential in LC-MS/MS, since the matrix effect can interfere in the analysis. Alternative materials which can be used as sorbents, such as chitosan in the clean-up step, are cheap and green options. In this study, chitosan from shrimp shell waste was evaluated as a sorbent in the QuEChERS method in order to determine multi-residues of veterinary drugs in different types of milk, i.e., fatty matrices. After optimization, the method showed correlation coefficients above 0.99, LOQs ranged between 1 and 50 μg kg-1 and recoveries ranged between 62 and 125%, with RSD < 20% for all veterinary drugs in all types of milk under study. The clean-up step which employed chitosan proved to be effective, since it reduced both the matrix effect (from values between -40 and -10% to values from -10 to +10%) and the extract turbidity (up to 95%). When the proposed method was applied to different milk samples, residues of albendazole (49 μg kg-1), sulfamethazine (
Study of travelling wave solutions for some special-type nonlinear evolution equations
NASA Astrophysics Data System (ADS)
Song, Junquan; Hu, Lan; Shen, Shoufeng; Ma, Wen-Xiu
2018-07-01
The tanh-function expansion method has been improved and used to construct travelling wave solutions of the form $U=\sum_{j=0}^{n} a_j \tanh^{j}\xi$ for some special-type nonlinear evolution equations, which have a variety of physical applications. The positive integer n can be determined by balancing the highest order linear term with the nonlinear term in the evolution equations. We improve the tanh-function expansion method with n = 0 by introducing a new transform $U=-W'(\xi)/W^{2}$. A nonlinear wave equation with source terms, and mKdV-type equations, are considered in order to show the effectiveness of the improved scheme. We also propose the tanh-function expansion method of implicit function form, and apply it to a Harry Dym-type equation as an example.
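As a worked example of the balancing step, consider the KdV equation (used here purely as an assumed illustration; the paper itself treats a nonlinear wave equation with source terms, mKdV-type and Harry Dym-type equations):

```latex
% For u_t + 6 u u_x + u_{xxx} = 0, look for U(\xi), \xi = x - ct, with U \sim \tanh^{n}\xi.
% The nonlinear term U U' behaves like \tanh^{2n+1}\xi and the highest-order
% linear term U''' like \tanh^{n+3}\xi, so balancing the exponents gives
\[
  2n + 1 = n + 3 \quad\Longrightarrow\quad n = 2,
\]
% and the ansatz becomes U = a_0 + a_1 \tanh\xi + a_2 \tanh^{2}\xi.
```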
Pencil Lettering; Commercial and Advertising Art--Basic: 9183.02.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The course outline is offered as a guide to teach the student the proper procedure in Commercial and Advertising Art Pencil Hand Lettering as it applies to several of the most popular type faces. The student will first master pencil stroking methods and branch off to specific mastery of type faces. Natural talent and aptitude, inherent taste, an…
THE COMPUTATION OF CHARACTERISTIC EXPONENTS IN THE PLANAR RESTRICTED PROBLEM OF THREE BODIES
methods are applied to evaluate the characteristic exponents of Rabe's Trojan Orbits; they are found to be of the stable type for the ovals, and of the unstable type for the horseshoe-shaped orbit. When the periodic orbit is symmetric with respect to the axis of syzygies, four independent
A variable pressure method for characterizing nanoparticle surface charge using pore sensors.
Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff
2012-04-03
A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.
A method of monitoring contact (pointed) welding
NASA Astrophysics Data System (ADS)
Bessonov, V. B.; Staroverov, N. E.; Larionov, I. A.; Guk, K. K.; Obodovskiy, A. V.
2018-02-01
The technology for welding parts of different thicknesses and of various materials is continually being improved, which is why the range of applied welding types and methods is constantly expanding. In this regard, the issue of monitoring welded joints is particularly acute. The goal was to develop a method of non-destructive radiographic inspection of spot welds that rates their quality with high accuracy.
NASA Astrophysics Data System (ADS)
Chen, Jia-Wen; Lin, Chuen-Fu; Wang, Shyang-Guang; Lee, Yi-Chieh; Chiang, Chung-Han; Huang, Min-Hui; Lee, Yi-Hsiung; Vitrant, Guy; Pan, Ming-Jeng; Lee, Horng-Mo; Liu, Yi-Jui; Baldeck, Patrice L.; Lin, Chih-Lang
2013-09-01
Measurements of optical tweezers forces on biological micro-objects can be used to develop innovative biodiagnostic methods. In the first part of this report, we present a new sensitive method to determine A, B, D types of red blood cells. Target antibodies are coated on glass surfaces. The optical forces needed to pull an RBC away from the glass surface increase when RBC antigens interact with their corresponding antibodies. In this work, measurements of stripping optical forces are used to distinguish the major RBC types: group O Rh(+), group A Rh(+) and group B Rh(+). The sensitivity of the method is found to be at least 16-fold higher than the conventional agglutination method. In the second part of this report, we present an original way to measure in real time the wall thickness of bacteria, which is one of the most important diagnostic parameters of bacterial drug resistance in hospital diagnostics. The optical tweezers force on a shell bacterium is proportional to its wall thickness. Experimentally, we determine the optical tweezers force applied on each bacteria family by measuring their escape velocity. The wall thickness of shell bacteria can then be obtained after calibrating with known bacteria parameters. The method has been successfully applied to identify, from blind tests, Methicillin-resistant Staphylococcus aureus (MRSA), including VSSA (NCTC 10442), VISA (Mu 50), and hetero-VISA (Mu 3).
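The escape-velocity measurement converts to a trapping force through the Stokes drag relation F = 6πηrv; the sketch below uses illustrative values for viscosity, radius and velocity, and an assumed calibration pair for the wall-thickness estimate, none of which come from the paper.

```python
# Escape-velocity to trapping-force sketch: at the escape velocity the optical
# force equals the Stokes drag, F = 6*pi*eta*r*v.  Radius, viscosity and the
# measured velocity are illustrative values, not data from the paper.
import numpy as np

eta = 1.0e-3        # water viscosity [Pa s]
r = 0.5e-6          # effective particle/bacterium radius [m]
v_escape = 20e-6    # measured escape velocity [m/s]

F_trap = 6 * np.pi * eta * r * v_escape
print(f"optical trapping force ~ {F_trap*1e12:.2f} pN")

# For a thin spherical shell (cell wall), trap force is roughly proportional
# to shell volume, hence to wall thickness t for t << r:
# F ~ 4*pi*r^2*t  ->  t ~ t_ref * (F / F_ref), given a calibration pair (t_ref, F_ref).
t_ref, F_ref = 30e-9, 0.15e-12       # assumed calibration values
t_est = t_ref * (F_trap / F_ref)
print(f"estimated wall thickness ~ {t_est*1e9:.0f} nm")
```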
A method for modeling discontinuities in a microwave coaxial transmission line
NASA Technical Reports Server (NTRS)
Otoshi, T. Y.
1992-01-01
A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.
NASA Astrophysics Data System (ADS)
Styk, Adam
2014-07-01
Classical time-averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require an amplitude of at least 0.19λ to be able to detect the resonant frequency of the object. Moreover, the precision of measurement is limited. This puts strong constraints on the type of element that can be tested. In this paper, a comparison of two methods of micro-object vibration measurement that overcome the aforementioned problems is presented. Both methods maintain a high measurement speed and extend the range of amplitudes that can be measured (below 0.19λ), and they can be easily applied to measurements of MEMS/MOEMS dynamic parameters.
Quantitative Evaluation of Management Courses: Part 1
ERIC Educational Resources Information Center
Cunningham, Cyril
1973-01-01
The author describes how he developed a method of evaluating and comparing management courses of different types and lengths by applying an ordinal system of relative values using a process of transmutation. (MS)
Inferring Single Neuron Properties in Conductance Based Balanced Networks
Pool, Román Rossi; Mato, Germán
2011-01-01
Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs is characterized by static and dynamic fluctuations. The dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730
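A minimal sketch of the reverse-correlation idea (computing a spike-triggered average of the input from spontaneous activity) is given below. The leaky-integrator "neuron" and all parameter values are illustrative stand-ins, not the conductance-based models analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.1, 200_000                  # time step (ms) and number of steps
current = rng.normal(0.0, 1.0, T)     # fluctuating synaptic input (arbitrary units)

# Toy leaky integrator producing spikes from the recorded input.
v, vth, tau = 0.0, 3.0, 20.0
spikes = []
for t in range(T):
    v += dt * (-v / tau + current[t])
    if v > vth:
        spikes.append(t)
        v = 0.0

# Reverse correlation: average the input in a window preceding each spike (the STA).
win = int(50 / dt)  # 50 ms window
segs = [current[t - win:t] for t in spikes if t >= win]
sta = np.mean(segs, axis=0)
print(f"{len(spikes)} spikes; STA peak {50 - np.argmax(sta) * dt:.1f} ms before the spike")
```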
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
Application of Newton's method to the postbuckling of rings under pressure loadings
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1989-01-01
The postbuckling response of circular rings (or long cylinders) is examined. The rings are subjected to four types of external pressure loadings; each type of pressure is defined by its magnitude and direction at points on the buckled ring. Newton's method is applied to the nonlinear differential equations of the exact inextensional theory for the ring problem. A zeroth approximation for the solution of the nonlinear equations, based on the mode shape corresponding to the first buckling pressure, is derived in closed form for each of the four types of pressure. The zeroth approximation is used to start the iteration cycle in Newton's method to compute numerical solutions of the nonlinear equations. The zeroth approximations for the postbuckling pressure-deflection curves are compared with the converged solutions from Newton's method and with similar results reported in the literature.
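The role of the closed-form zeroth approximation is to start the Newton iteration close to the desired branch. A generic sketch of that workflow (a small illustrative nonlinear system with a finite-difference Jacobian, not the ring equations themselves) follows:

```python
import numpy as np

def newton(F, x0, jac_eps=1e-6, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0 with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        J = np.empty((f.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += jac_eps
            J[:, j] = (F(xp) - f) / jac_eps
        dx = np.linalg.solve(J, -f)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative two-equation system; the "zeroth approximation" plays the role of the
# closed-form buckling-mode estimate used to start the iteration cycle.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
x_zeroth = np.array([-1.8, 1.0])
print("converged solution:", newton(F, x_zeroth))
```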
Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L
2012-05-07
The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
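The data-integration step (combining the carbohydrate, ricinoleic acid and solvent measurements in a multivariate factor model) can be sketched as below. The feature table is synthetic and the scikit-learn FactorAnalysis estimator is a generic stand-in for the factor analysis used in the study:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
columns = ["mannose", "arabinose", "fucose", "ricinoleic_acid", "acetone"]
# 4 hypothetical preparation methods x 4 seed varieties = 16 synthetic samples.
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(4, len(columns)))
               for m in ([1, 2, 0.5, 3, 0], [2, 1, 1.5, 1, 2],
                         [0.5, 0.5, 2, 2, 1], [3, 3, 3, 0.5, 0.5])])

# Integrate the complementary data sets by extracting two latent factors;
# samples from the same preparation method should cluster in factor space.
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(
    StandardScaler().fit_transform(X))
for prep in range(4):
    centroid = scores[prep * 4:(prep + 1) * 4].mean(axis=0)
    print(f"preparation {prep + 1}: factor-space centroid {np.round(centroid, 2)}")
```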
NASA Astrophysics Data System (ADS)
Recchioni, Maria Cristina
2001-12-01
This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approximating the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients to a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of the roots of these nonlinear equations was studied in (Pasquini, 1994). In this paper, following the same lines, more favourable results than those in (Pasquini, 1994) are proven for the particular case of polynomial solutions with symmetric zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices even if these matrices are real and symmetric.
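The two routes compared in the abstract (Jacobi-matrix eigenvalues vs. a Newton iteration) can be illustrated on classical (physicists') Hermite polynomials, whose zeros are also symmetric about the origin. This is only a proxy for the Hermite-Sobolev and Freud cases treated in the paper:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

n = 10  # degree of H_n

# Route 1: eigenvalues of the symmetric tridiagonal Jacobi matrix (Golub-Welsch);
# for physicists' Hermite polynomials the off-diagonal entries are sqrt(k/2).
off = np.sqrt(np.arange(1, n) / 2.0)
jacobi = np.diag(off, 1) + np.diag(off, -1)
roots_qr = np.sort(np.linalg.eigvalsh(jacobi))

# Route 2: Newton refinement of those roots using H_n'(x) = 2 n H_{n-1}(x).
cn = np.zeros(n + 1); cn[n] = 1.0      # coefficient vector selecting H_n
cn1 = np.zeros(n); cn1[n - 1] = 1.0    # coefficient vector selecting H_{n-1}
x = roots_qr.copy()
for _ in range(5):
    x -= hermval(x, cn) / (2.0 * n * hermval(x, cn1))

print("max |Newton - QR| :", np.max(np.abs(x - roots_qr)))
print("max residual |H_n|:", np.max(np.abs(hermval(x, cn))))
```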
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuchiya, Hikaru; Tanaka, Keiji, E-mail: tanaka-kj@igakuken.or.jp; Saeki, Yasushi, E-mail: saeki-ys@igakuken.or.jp
2013-06-28
Highlights: •The parallel reaction monitoring method was applied to ubiquitin quantification. •The ubiquitin PRM method is highly sensitive even in biological samples. •Using the method, we revealed that Ufd4 assembles the K29-linked ubiquitin chain. -- Abstract: Ubiquitylation is an essential posttranslational protein modification that is implicated in a diverse array of cellular functions. Although cells contain eight structurally distinct types of polyubiquitin chains, detailed function of several chain types including K29-linked chains has remained largely unclear. Current mass spectrometry (MS)-based quantification methods are highly inefficient for low abundant atypical chains, such as K29- and M1-linked chains, in complex mixtures that typically contain highly abundant proteins. In this study, we applied parallel reaction monitoring (PRM), a quantitative, high-resolution MS method, to quantify ubiquitin chains. The ubiquitin PRM method allows us to quantify 100 attomole amounts of all possible ubiquitin chains in cell extracts. Furthermore, we quantified ubiquitylation levels of ubiquitin-proline-β-galactosidase (Ub-P-βgal), a historically known model substrate of the ubiquitin fusion degradation (UFD) pathway. In wild-type cells, Ub-P-βgal is modified with ubiquitin chains consisting of 21% K29- and 78% K48-linked chains. In contrast, K29-linked chains are not detected in UFD4 knockout cells, suggesting that Ufd4 assembles the K29-linked ubiquitin chain(s) on Ub-P-βgal in vivo. Thus, the ubiquitin PRM is a novel, useful, quantitative method for analyzing the highly complicated ubiquitin system.
Sensor And Method For Detecting A Superstrate
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Cari, James R. (Inventor); Ngo, Phong H. (Inventor); Fink, Patrick W. (Inventor); Siekierski, James D. (Inventor)
2006-01-01
Method and apparatus are provided for determining a superstrate on or near a sensor, e.g., for detecting the presence of an ice superstrate on an airplane wing or a road. In one preferred embodiment, multiple measurement cells are disposed along a transmission line. While the present invention is operable with different types of transmission lines, construction details for a presently preferred coplanar waveguide and a microstrip waveguide are disclosed. A computer simulation is provided as part of the invention for predicting results of a simulated superstrate detector system. The measurement cells may be physically partitioned, nonphysically partitioned with software or firmware, or include a combination of different types of partitions. In one embodiment, a plurality of transmission lines are utilized wherein each transmission line includes a plurality of measurement cells. The plurality of transmission lines may be multiplexed with the signal from each transmission line being applied to the same phase detector. In one embodiment, an inverse problem method is applied to determine the superstrate dielectric for a transmission line with multiple measurement cells.
Missing Data in Clinical Studies: Issues and Methods
Ibrahim, Joseph G.; Chu, Haitao; Chen, Ming-Hui
2012-01-01
Missing data are a prevailing problem in any type of data analyses. A participant variable is considered missing if the value of the variable (outcome or covariate) for the participant is not observed. In this article, various issues in analyzing studies with missing data are discussed. Particularly, we focus on missing response and/or covariate data for studies with discrete, continuous, or time-to-event end points in which generalized linear models, models for longitudinal data such as generalized linear mixed effects models, or Cox regression models are used. We discuss various classifications of missing data that may arise in a study and demonstrate in several situations that the commonly used method of throwing out all participants with any missing data may lead to incorrect results and conclusions. The methods described are applied to data from an Eastern Cooperative Oncology Group phase II clinical trial of liver cancer and a phase III clinical trial of advanced non–small-cell lung cancer. Although the main area of application discussed here is cancer, the issues and methods we discuss apply to any type of study. PMID:22649133
Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.
Chmelnitsky, Elly G; Ferguson, Steven H
2012-06-01
Classification of animal vocalizations is often done by a human observer using aural and visual analysis but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008, in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in cluster analysis groups than in whistle types described by visual and aural analysis and results were similar to the whistle contours described. This study provided the first description of beluga calls in Hudson Bay and using two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
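The hierarchical clustering step can be sketched as follows. The whistle feature table is synthetic (real values would be six characteristics measured from spectrograms), and Ward linkage cut at twelve clusters mirrors the group count reported in the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical 200 whistles x six call characteristics (e.g. start/end/min/max
# frequency, duration, inflection points).
rng = np.random.default_rng(3)
features = rng.normal(size=(200, 6)) + rng.integers(0, 4, size=(200, 1)) * 2.0

# Standardise the characteristics, build a hierarchical (Ward) tree and cut it
# into twelve whistle groups.
Z = linkage(zscore(features, axis=0), method="ward")
groups = fcluster(Z, t=12, criterion="maxclust")
print("group sizes:", np.bincount(groups)[1:])
```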
NASA Astrophysics Data System (ADS)
Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong
2018-05-01
In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media where all the noise from the experiment is present.
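A generic projected proximal-gradient (ISTA-style) sketch for nonnegative sparse recovery is shown below. It illustrates the gradient-type idea only; it is not the paper's specific penalty functional or its semismooth Newton solver:

```python
import numpy as np

def nonneg_sparse_gradient(A, y, lam, step=None, n_iter=500):
    """Projected ISTA sketch for min 0.5*||Ax - y||^2 + lam*sum(x), subject to x >= 0.

    For nonnegative x the l1 penalty is linear, so the proximal/projection step
    reduces to max(x - step*lam, 0).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

# Small synthetic recovery of a nonnegative sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200); x_true[[5, 40, 120]] = [2.0, 1.0, 3.0]
y = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = nonneg_sparse_gradient(A, y, lam=0.1)
print("recovered support:", np.nonzero(x_hat > 0.1)[0])
```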
The development of a revised version of multi-center molecular Ornstein-Zernike equation
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi
2012-04-01
Ornstein-Zernike (OZ)-type theory is a powerful tool to obtain the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of the function were examined as the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, benzene in aqueous solution and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature. Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
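The inductive scoring form stated in the abstract, score(i, j) = x_i^T W H^T y_j, can be sketched with a simple gradient-descent solver on synthetic data. The solver, loss details and data below are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_dis, dg, dd, k = 60, 40, 8, 6, 4
X = rng.normal(size=(n_genes, dg))                        # gene features
Y = rng.normal(size=(n_dis, dd))                          # disease features
M = (rng.random((n_genes, n_dis)) < 0.05).astype(float)   # known associations
mask = rng.random((n_genes, n_dis)) < 0.3                 # entries used for training

# Inductive matrix completion: learn W, H so that x_i^T W H^T y_j fits the observed
# entries; because predictions use features, new diseases with features are scorable.
W = 0.01 * rng.normal(size=(dg, k))
H = 0.01 * rng.normal(size=(dd, k))
lr, lam = 1e-3, 0.1
for _ in range(2000):
    E = mask * (X @ W @ H.T @ Y.T - M)       # residual on training entries only
    gW = X.T @ E @ Y @ H + lam * W
    gH = Y.T @ E.T @ X @ W + lam * H
    W -= lr * gW
    H -= lr * gH

scores = X @ W @ H.T @ Y.T                   # defined for any disease with features
print("training RMSE:", np.sqrt((mask * (scores - M) ** 2).sum() / mask.sum()))
```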
A non-asymptotic model of dynamics of honeycomb lattice-type plates
NASA Astrophysics Data System (ADS)
Cielecka, Iwona; Jędrysiak, Jarosław
2006-09-01
Lightweight structures, consisting of special composite material systems such as sandwich plates, are often used in aerospace or naval engineering. In composite sandwich plates, the intermediate core is usually made of cellular structures, e.g. honeycomb micro-frames, reinforcing static and dynamic properties of these plates. Here, a new non-asymptotic continuum model of honeycomb lattice-type plates is shown and applied to the analysis of dynamic problems. The general formulation of the model for periodic lattice-type plates of an arbitrary lay-out was presented by Cielecka and Jędrysiak [Journal of Theoretical and Applied Mechanics 40 (2002) 23-46]. This model, partly based on the tolerance averaging method developed for periodic composite solids by Woźniak and Wierzbicki [Averaging techniques in thermomechanics of composite solids, Wydawnictwo Politechniki Częstochowskiej, Częstochowa, 2000], takes into account the effect of the length microstructure size on the dynamic plate behaviour. The presented method leads to model equations describing the above effect for honeycomb lattice-type plates. These equations have a form similar to the equations for isotropic cases. The dynamic analysis of such plates exemplifies this effect, which is significant and cannot be neglected. The physical correctness of the obtained results is also discussed.
de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine
2016-03-01
Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on a routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) substituted phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for the Salmonella source attribution, and assess the utility of the results for the food safety decisionmakers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss that an adjustment of the discriminatory level of the subtyping method applied often will be required to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.
A One-Step Immunostaining Method to Visualize Rodent Muscle Fiber Type within a Single Specimen
Sawano, Shoko; Komiya, Yusuke; Ichitsubo, Riho; Ohkawa, Yasuyuki; Nakamura, Mako; Tatsumi, Ryuichi; Ikeuchi, Yoshihide; Mizunoya, Wataru
2016-01-01
In this study, we present a quadruple immunostaining method for rapid muscle fiber typing of mice and rats using antibodies specific to the adult myosin heavy chain (MyHC) isoforms MyHC1, 2A, 2X, and 2B, which are common marker proteins of distinct muscle fiber types. We developed rat monoclonal antibodies specific to each MyHC isoform and conjugated these four antibodies to fluorophores with distinct excitation and emission wavelengths. By mixing the four types of conjugated antibodies, MyHC1, 2A, 2X, and 2B could be distinguished within a single specimen, allowing for facile delineation of skeletal muscle fiber types. Furthermore, we could observe hybrid fibers expressing MyHC2X and MyHC2B together in single longitudinal muscle sections from mice and rats, which was not attainable with previous techniques. This staining method is expected to be applied to study muscle fiber type transition in response to environmental factors, and to ultimately develop techniques to regulate animal muscle fiber types. PMID:27814384
Bourrinet, P; Conduzorgues, J P; Dutertre, H; Macabies, J; Masson, P; Maurin, J; Mercier, O
1995-02-01
An interlaboratory study was carried out to determine the feasibility and reliability of a method using the hamster cheek pouch as a model for assessing the potential irritative properties of substances intended to be applied to the lips or other mucous membranes. The test substances were applied once daily to both pouches for 14 consecutive days. Local and general tolerances were appraised throughout the study. At the end of the study, histologic examination of the pouches and the main organs was performed. Results of the feasibility study, conducted on various types of commercial products, indicated that this model is suitable for preparations of various consistence and composition. Results of the reliability study, carried out on gel-type preparations containing various concentrations of a known irritant, sodium lauryl sulfate, indicated that the method elicits a dose-dependent reaction for this compound. This hamster cheek pouch method was reproducible for the various parameters under consideration: local tolerance, general tolerance, histologic examination. For all products, results were in good agreement among the various laboratories participating in the study. The French regulatory authorities of the Fraud Repression Department have accepted it as an official method for the evaluation of the potential irritative properties of cosmetics and hygiene products intended to be applied to the lips or other mucous membranes.
Verweij, Jaco J; Stensvold, C Rune
2014-04-01
Over the past few decades, nucleic acid-based methods have been developed for the diagnosis of intestinal parasitic infections. Advantages of nucleic acid-based methods are numerous; typically, these include increased sensitivity and specificity and simpler standardization of diagnostic procedures. DNA samples can also be stored and used for genetic characterization and molecular typing, providing a valuable tool for surveys and surveillance studies. A variety of technologies have been applied, and some specific and general pitfalls and limitations have been identified. This review provides an overview of the multitude of methods that have been reported for the detection of intestinal parasites and offers some guidance in applying these methods in the clinical laboratory and in epidemiological studies.
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located applying MUltiple SIgnal Classification (MUSIC) on magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents estimation of the true number of brain-signal sources accurately. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
Application of simple negative feedback model for avalanche photodetectors investigation
NASA Astrophysics Data System (ADS)
Kushpil, V. V.
2009-10-01
A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as a new type of device operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it possible to use it as a tool for analysing various APD parameters. It also allows a better understanding of APD operating conditions. Simulations and experimental data analysis for different types of APDs are presented.
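Miller's empirical formula for avalanche multiplication, M(V) = 1 / (1 - (V/V_B)^n), underlies such models. A short sketch fitting the breakdown voltage and Miller exponent to hypothetical gain data (illustrative numbers only, not measurements from the paper) follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def miller_gain(v, v_break, n):
    """Miller's empirical formula for avalanche multiplication: M = 1 / (1 - (V/VB)^n)."""
    return 1.0 / (1.0 - (v / v_break) ** n)

# Hypothetical measured gain vs. reverse bias for an APD.
v = np.array([20.0, 40.0, 60.0, 80.0, 90.0, 95.0, 98.0])
m_meas = np.array([1.1, 1.5, 2.4, 5.1, 9.8, 19.0, 45.0])

# Bounds keep the fitted breakdown voltage above the largest bias point.
(vb_fit, n_fit), _ = curve_fit(miller_gain, v, m_meas, p0=[110.0, 2.0],
                               bounds=([99.0, 0.5], [200.0, 10.0]))
print(f"fitted breakdown voltage ~ {vb_fit:.1f} V, Miller exponent n ~ {n_fit:.2f}")
```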
Charting improvements in US registry HLA typing ambiguity using a typing resolution score.
Paunić, Vanja; Gragert, Loren; Schneider, Joel; Müller, Carlheinz; Maiers, Martin
2016-07-01
Unrelated stem cell registries have been collecting HLA typing of volunteer bone marrow donors for over 25 years. Donor selection for hematopoietic stem cell transplantation is based primarily on matching the alleles of donors and patients at five polymorphic HLA loci. As HLA typing technologies have continually advanced since the beginnings of stem cell transplantation, registries have accrued typings of varied HLA typing ambiguity. We present a new typing resolution score (TRS), based on the likelihood of self-match, that allows the systematic comparison of HLA typings across different methods, data sets and populations. We apply the TRS to chart improvement in HLA typing within the Be The Match Registry of the United States from the initiation of DNA-based HLA typing to the current state of high-resolution typing using next-generation sequencing technologies. In addition, we present a publicly available online tool for evaluation of any given HLA typing. This TRS objectively evaluates HLA typing methods and can help define standards for acceptable recruitment HLA typing. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Legendre spectral-collocation method for solving some types of fractional optimal control problems
Sweilam, Nasser H.; Al-Ajami, Tamer M.
2014-01-01
In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937
NASA Technical Reports Server (NTRS)
Liu, A. F.
1974-01-01
A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.
Numerical methods for systems of conservation laws of mixed type using flux splitting
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1990-01-01
The essentially non-oscillatory (ENO) finite difference scheme is applied to systems of conservation laws of mixed hyperbolic-elliptic type. A flux splitting, with the corresponding Jacobi matrices having real and positive/negative eigenvalues, is used. The hyperbolic ENO operator is applied separately. The scheme is numerically tested on the van der Waals equation in fluid dynamics. Convergence was observed with good resolution to weak solutions for various Riemann problems, which are then numerically checked to be admissible as the viscosity-capillarity limits. The interesting phenomena of the shrinking of elliptic regions if they are present in the initial conditions were also observed.
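A much-simplified sketch of the flux-splitting idea is given below: a first-order upwind scheme with global Lax-Friedrichs splitting applied to the scalar Burgers equation. It is a stand-in for the ENO machinery and the mixed-type van der Waals system discussed in the abstract:

```python
import numpy as np

# First-order upwind scheme with Lax-Friedrichs flux splitting for u_t + (u^2/2)_x = 0.
N, L, T, cfl = 400, 2.0, 0.3, 0.4
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
u = np.where(x < 0.0, 1.0, -0.5)                 # Riemann initial data

f = lambda v: 0.5 * v ** 2
t = 0.0
while t < T:
    alpha = np.max(np.abs(u)) + 1e-12            # max wave speed for the splitting
    dt = cfl * dx / alpha
    fp = 0.5 * (f(u) + alpha * u)                # f+ : right-going part (positive eigenvalues)
    fm = 0.5 * (f(u) - alpha * u)                # f- : left-going part (negative eigenvalues)
    F = fp + np.roll(fm, -1)                     # upwind flux at interface i+1/2
    u = u - dt / dx * (F - np.roll(F, 1))        # conservative update (periodic boundaries)
    t += dt

print("shock position estimate:", x[np.argmin(np.diff(u))])
```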
NASA Astrophysics Data System (ADS)
Hirano, Taichi; Sakai, Keiji
2017-07-01
Viscoelasticity is a unique characteristic of soft materials and describes its dynamic response to mechanical stimulations. A creep test is an experimental method for measuring the strain ratio/rate against an applied stress, thereby assessing the viscoelasticity of the materials. We propose two advanced experimental systems suitable for the creep test, adopting our original electromagnetically spinning (EMS) technique. This technique can apply a constant torque by a noncontact mechanism, thereby allowing more sensitive and rapid measurements. The viscosity and elasticity of a semidilute wormlike micellar solution were determined using two setups, and the consistency between the results was assessed.
Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene
2013-03-25
The comparative study of the results of various segmentation methods for the digital images of the follicular lymphoma cancer tissue section is described in this paper. The sensitivity and specificity and some other parameters of the following adaptive threshold methods of segmentation: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method, are calculated. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. The thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows the selection of method and image-type pairs for which a given method is most efficient according to various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the White, the Bernsen and the Sauvola methods give better results than the rest of the methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated totally. However, the best results are achieved for the monochromatic image in which intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1 and the sensitivities are 0.74 for the White and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
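The two strongest-performing local threshold rules named above, Niblack (T = m + k*s) and Sauvola (T = m*(1 + k*((s/R) - 1))), are available in scikit-image. The sketch below applies them to a synthetic stand-in for the "brown component" image (dark nuclei-like blobs on an uneven background):

```python
import numpy as np
from skimage.filters import threshold_niblack, threshold_sauvola

# Synthetic monochromatic image: dark objects on a bright, slightly noisy background.
rng = np.random.default_rng(0)
img = 0.8 + 0.1 * rng.random((256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(20, 236, size=(30, 2)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = 0.25   # nuclei-like dark blobs

# Local adaptive thresholds; pixels darker than the local threshold count as objects.
mask_niblack = img < threshold_niblack(img, window_size=25, k=0.2)
mask_sauvola = img < threshold_sauvola(img, window_size=25, k=0.2)
print("object pixels - Niblack:", int(mask_niblack.sum()),
      "Sauvola:", int(mask_sauvola.sum()))
```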
Resolution in forensic microbial genotyping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velsko, S P
2005-08-30
Resolution is a key parameter for differentiating among the large number of strain typing methods that could be applied to pathogens involved in bioterror events or biocrimes. In this report we develop a first-principles analysis of strain typing resolution using a simple mathematical model to provide a basis for the rational design of microbial typing systems for forensic applications. We derive two figures of merit that describe the resolving power and phylogenetic depth of a strain typing system. Rough estimates of these figures-of-merit for MLVA, MLST, IS element, AFLP, hybridization microarrays, and other bacterial typing methods are derived from mutation rate data reported in the literature. We also discuss the general problem of how to construct a ''universal'' practical typing system that has the highest possible resolution short of whole-genome sequencing, and that is applicable with minimal modification to a wide range of pathogens.
Application of neutron transmutation doping method to initially p-type silicon material.
Kim, Myong-Seop; Kang, Ki-Doo; Park, Sang-Jun
2009-01-01
The neutron transmutation doping (NTD) method was applied to initially p-type silicon in order to extend the NTD applications at HANARO. The relationship between the irradiation neutron fluence and the final resistivity of the initially p-type silicon material was investigated. The proportional constant between the neutron fluence and the resistivity was determined to be 2.3473×10^19 n·Ω·cm^-1. The deviation of the final resistivity from the target for almost all the irradiation results of the initially p-type silicon ingots was in the range from -5% to 2%. In addition, the burn-up effect of the boron impurities, the residual ^32P activity and the effect of the compensation characteristics for the initially p-type silicon were studied. In conclusion, a practical methodology to perform the neutron transmutation doping of initially p-type silicon ingots was established.
NASA Astrophysics Data System (ADS)
Kim, Kwangmin; Go, Byeong-Soo; Sung, Hae-Jin; Park, Hea-chul; Kim, Seokho; Lee, Sangjin; Jin, Yoon-Su; Oh, Yunsang; Park, Minwon; Yu, In-Keun
2014-09-01
This paper describes the design specifications and performance of a real toroid-type high temperature superconducting (HTS) DC reactor. The HTS DC reactor was designed using 2G HTS wires. The HTS coils of the toroid-type DC reactor magnet were made in the form of a D-shape. The target inductance of the HTS DC reactor was 400 mH. The expected operating temperature was under 20 K. The electromagnetic performance of the toroid-type HTS DC reactor magnet was analyzed using the finite element method program. A conduction cooling method was adopted for reactor magnet cooling. Performances of the toroid-type HTS DC reactor were analyzed through experiments conducted under the steady-state and charge conditions. The fundamental design specifications and the data obtained from this research will be applied to the design of a commercial-type HTS DC reactor.
Dispersive shock waves in systems with nonlocal dispersion of Benjamin-Ono type
NASA Astrophysics Data System (ADS)
El, G. A.; Nguyen, L. T. K.; Smyth, N. F.
2018-04-01
We develop a general approach to the description of dispersive shock waves (DSWs) for a class of nonlinear wave equations with a nonlocal Benjamin-Ono type dispersion term involving the Hilbert transform. Integrability of the governing equation is not a pre-requisite for the application of this method which represents a modification of the DSW fitting method previously developed for dispersive-hydrodynamic systems of Korteweg-de Vries (KdV) type (i.e. reducible to the KdV equation in the weakly nonlinear, long wave, unidirectional approximation). The developed method is applied to the Calogero-Sutherland dispersive hydrodynamics for which the classification of all solution types arising from the Riemann step problem is constructed and the key physical parameters (DSW edge speeds, lead soliton amplitude, intermediate shelf level) of all but one solution type are obtained in terms of the initial step data. The analytical results are shown to be in excellent agreement with results of direct numerical simulations.
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
Technical product bulletin: this surface washing agent, used in oil spill cleanups, may be applied by any method (drum pump, pressurized spray applicator, brush, or aqueous wash tank) depending on surface and the type and viscosity of oil/contamination.
29 CFR 779.413 - Methods of compensation of retail store employees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS ACT AS APPLIED TO RETAILERS OF GOODS OR SERVICES Provisions Relating to Certain Employees of... represent commissions “on goods or services,” which would include all types of commissions customarily based...
NASA Technical Reports Server (NTRS)
Takeda, K.
1985-01-01
A method was developed for estimating the distribution of snow and the snow water equivalent in Japan by combining LANDSAT data with the degree day method. A snow runoff model was improved and applied to the Okutadami River basin. The Martinec Rango model from the U.S. was applied to Japanese river basins to verify its applicability. This model was then compared with the Japanese model. Analysis of microwave measurements obtained by a radiometer on a tower over dry snow in Hokkaido indicate a certain correlation between brightness temperature and snowpack properties. A correlation between brightness temperature and depth of dry snow in an inland plain area was revealed in NIMBUS SMMR data obtained from the U.S. Calculation of evaporation using airborne remote sensing data and a Priestley-Taylor type of equation shows that the differentiation of evaporation with vegetation type is not remarkable because of little evapotranspiration in winter.
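The degree-day component of such snowmelt estimates can be sketched in a few lines: daily melt is a degree-day factor times the positive temperature excess, capped by the remaining snow water equivalent. The factor and the temperature series below are assumed, illustrative values; in practice the factor is calibrated per basin and season:

```python
import numpy as np

def degree_day_melt(temps_c, swe_mm, ddf=4.0, t_base=0.0):
    """Degree-day snowmelt: daily melt = ddf * max(T - t_base, 0), limited by the SWE left.

    ddf is the degree-day factor in mm per degC per day (assumed value for illustration).
    """
    swe = [swe_mm]
    melt = []
    for t in temps_c:
        m = min(max(ddf * (t - t_base), 0.0), swe[-1])
        melt.append(m)
        swe.append(swe[-1] - m)
    return np.array(melt), np.array(swe[1:])

# One hypothetical week of daily mean temperatures (degC) over a 120 mm SWE snowpack.
temps = [-2.0, 1.0, 3.5, 5.0, 2.0, 6.5, 8.0]
melt, swe = degree_day_melt(temps, swe_mm=120.0)
print("daily melt (mm):", np.round(melt, 1), " remaining SWE (mm):", np.round(swe, 1))
```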
Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing
1989-01-01
The Roe flux-difference splitting method has been extended to treat two-dimensional viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-stage Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.
NASA Astrophysics Data System (ADS)
Yuan, Na
2018-04-01
With the aid of symbolic computation, we present an improved (G′/G)-expansion method, which can be applied to seek more types of exact solutions for certain nonlinear evolution equations. As an illustration, we choose the (3 + 1)-dimensional potential Yu-Toda-Sasa-Fukuyama equation to demonstrate the validity and advantages of the method. As a result, abundant explicit and exact nontraveling wave solutions are obtained, including solitary wave solutions, nontraveling wave solutions and dromion soliton solutions. Some particular localized excitations and the interactions between two solitary waves are investigated. The method can also be applied to other nonlinear partial differential equations.
A comparison of automated crater detection methods
NASA Astrophysics Data System (ADS)
Bandeira, L.; Barreira, C.; Pina, P.; Saraiva, J.
2008-09-01
This work presents early results of a comparison between some common methodologies for automated crater detection. The three procedures considered were applied to images of the surface of Mars, thus illustrating some pros and cons of their use. We aim to establish the clear advantages of using this type of method in the study of planetary surfaces.
Training Needs Assessment of Technical Skills in Managers of Tehran Electricity Distribution Company
ERIC Educational Resources Information Center
Koohi, Amir Hasan; Ghandali, Fatemeh; Dehghan, Hasan; Ghandali, Najme
2016-01-01
This dissertation was conducted in order to investigate and identify the training needs of managers (top and middle) in the Tehran Electricity Distribution Company. In terms of purpose, the research is of the applied type; in terms of data collection method, it is a descriptive survey. The statistical population in this study is all of the managers in…
Teachers' Adoptation Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify the extent to which the student-centered education approach is applied in the primary, middle and high schools in Düzce. An explanatory design, which is one type of mixed-methods research, and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…
Method for separating single-wall carbon nanotubes and compositions thereof
NASA Technical Reports Server (NTRS)
Hauge, Robert H. (Inventor); Kittrell, W. Carter (Inventor); Sivarajan, Ramesh (Inventor); Bachilo, Sergei M. (Inventor); Weisman, R. Bruce (Inventor); Smalley, Richard E. (Inventor); Strano, Michael S. (Inventor)
2006-01-01
The invention relates to a process for sorting and separating a mixture of (n, m) type single-wall carbon nanotubes according to (n, m) type. A mixture of (n, m) type single-wall carbon nanotubes is suspended such that the single-wall carbon nanotubes are individually dispersed. The suspension can be prepared in a surfactant-water solution, where the surfactant surrounding the nanotubes keeps them isolated and prevents them from aggregating with other nanotubes. The nanotube suspension is acidified to protonate a fraction of the nanotubes. An electric field is applied and the protonated nanotubes migrate in the electric field at different rates depending on their (n, m) type. Fractions of nanotubes are collected at different fractionation times. The process of protonation, applying an electric field, and fractionation is repeated at increasingly higher pH to separate the (n, m) nanotube mixture into individual (n, m) nanotube fractions. The separation enables new electronic devices requiring selected (n, m) nanotube types.
Deep neural network and noise classification-based speech enhancement
NASA Astrophysics Data System (ADS)
Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei
2017-07-01
In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
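A minimal sketch of the two-stage pipeline (classify the noise type from speech-absent frames with per-type GMMs, then enhance with the matching model) is given below. The features are synthetic, and a scikit-learn MLPRegressor stands in for the DNN described in the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
noise_types = ["babble", "factory", "white"]
n_mfcc, n_freq = 13, 64

# 1) One GMM per noise type, trained on MFCC features from speech-absent frames
#    (synthetic features here; real MFCCs would come from an audio front-end).
gmms = {}
for k, name in enumerate(noise_types):
    feats = rng.normal(loc=k * 2.0, scale=1.0, size=(500, n_mfcc))
    gmms[name] = GaussianMixture(n_components=4, random_state=0).fit(feats)

# 2) One enhancement regressor per noise type, mapping noisy to clean log-spectra.
models = {}
for k, name in enumerate(noise_types):
    clean = rng.normal(size=(800, n_freq))
    noisy = clean + rng.normal(loc=k, scale=0.5, size=clean.shape)
    models[name] = MLPRegressor(hidden_layer_sizes=(64,), max_iter=200,
                                random_state=0).fit(noisy, clean)

# 3) At test time: pick the noise type with the highest GMM likelihood, then enhance.
test_noise_mfcc = rng.normal(loc=2.0, scale=1.0, size=(50, n_mfcc))  # "factory"-like
detected = max(noise_types, key=lambda n: gmms[n].score(test_noise_mfcc))
enhanced = models[detected].predict(rng.normal(size=(10, n_freq)) + 1.0)
print("detected noise type:", detected, "| enhanced frames:", enhanced.shape)
```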
Efficient composite broadband polarization retarders and polarization filters
NASA Astrophysics Data System (ADS)
Dimova, E.; Ivanov, S. S.; Popkirov, G.; Vitanov, N. V.
2014-12-01
A new type of broadband polarization half-wave retarder and narrowband polarization filters are described and experimentally tested. Both the retarders and the filters are designed as composite stacks of standard optical half-wave plates, each twisted at a specific angle. The theoretical background of the proposed optical devices was obtained by analogy with the method of composite pulses, known from nuclear and quantum physics. We show that by combining two composite filters built from different numbers and types of waveplates, the transmission spectrum is reduced from about 700 nm to about 10 nm in width. We experimentally demonstrate that this method can be applied to different types of waveplates (broadband, zero-order, multiple-order, etc.).
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, rainfall to name but a few. The flexibility of copula allows researchers to model dependence structure beyond Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copula. These are parametric, nonparametric, and semiparametric method. In this article we survey kernel-type estimators for copula such as mirror reflection kernel, beta kernel, transformation method and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
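One of the surveyed estimators, the mirror-reflection kernel method, can be sketched directly: build pseudo-observations from ranks, reflect each point across the edges of the unit square so kernel mass is not lost at the boundary, and sum Gaussian kernels. The bandwidth and the synthetic "returns" below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, rankdata

def mirror_reflection_copula_density(x, y, grid, h=0.05):
    """Mirror-reflection kernel estimator of a bivariate copula density on [0,1]^2."""
    n = len(x)
    u = rankdata(x) / (n + 1.0)          # pseudo-observations
    v = rankdata(y) / (n + 1.0)
    # the 9 reflected copies of each pseudo-observation
    refl_u = np.concatenate([u, -u, 2 - u] * 3)
    refl_v = np.concatenate([v] * 3 + [-v] * 3 + [2 - v] * 3)
    gu, gv = np.meshgrid(grid, grid)
    dens = np.zeros_like(gu)
    for a, b in zip(refl_u, refl_v):
        dens += norm.pdf(gu, a, h) * norm.pdf(gv, b, h)
    return dens / n

# Example: dependence between two hypothetical daily stock-index return series.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
grid = np.linspace(0.05, 0.95, 19)
c_hat = mirror_reflection_copula_density(z[:, 0], z[:, 1], grid)
print("estimated copula density at (0.5, 0.5):", round(c_hat[9, 9], 3))
```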
Tobias, Irene S; Lazauskas, Kara K; Arevalo, Jose A; Bagley, James R; Brown, Lee E; Galpin, Andrew J
2018-04-01
Human skeletal muscle is a heterogeneous mixture of multiple fiber types (FT). Unfortunately, present methods for FT-specific study are constrained by limits of protein detection in single-fiber samples. These limitations beget compensatory resource-intensive procedures, ultimately dissuading investigators from pursuing FT-specific research. Additionally, previous studies neglected hybrid FT, confining their analyses to only pure FT. Here we present novel methods of protein detection across a wider spectrum of human skeletal muscle FT using fully automated capillary nanoimmunoassay (CNIA) technology. CNIA allowed a ~20-fold-lower limit of 5'-AMP-activated protein kinase (AMPK) detection compared with Western blotting. We then performed FT-specific assessment of AMPK expression as a proof of concept. Individual human muscle fibers were mechanically isolated, dissolved, and myosin heavy chain (MHC) fiber typed via SDS-PAGE. Single-fiber samples were combined in pairs and grouped into MHC I, MHC I/IIa, MHC IIa, and MHC IIa/IIx for expression analysis of AMPK isoforms α1, α2, β1, β2, γ2, and γ3 with a tubulin loading control. Significant FT-specific differences were found for α2 (1.7-fold higher in MHC IIa and MHC IIa/IIx vs. others), γ2 (2.5-fold higher in MHC IIa vs. others), and γ3 (2-fold higher in MHC IIa and 4-fold higher in MHC IIa/IIx vs. others). Development of a protocol that combines the efficient and sensitive CNIA technology with comprehensive SDS-PAGE fiber typing marks an important advancement in FT-specific research because it allows more precise study of the molecular mechanisms governing metabolism, adaptation, and regulation in human muscle. NEW & NOTEWORTHY We demonstrate the viability of applying capillary nanoimmunoassay technology to the study of fiber type-specific protein analysis in human muscle fibers. This novel technique enables a ~20-fold-lower limit of protein detection compared with traditional Western blotting methods. Combined with SDS-PAGE methods of fiber typing, we apply this technique to compare 5'-AMP-activated protein kinase isoform expression in myosin heavy chain (MHC) I, MHC I/IIa, MHC IIa, and MHC IIa/IIx fiber types.
Bioengineered anterior cruciate ligament
NASA Technical Reports Server (NTRS)
Martin, Ivan (Inventor); Altman, Gregory (Inventor); Kaplan, David (Inventor); Vunjak-Novakovic, Gordana (Inventor)
2001-01-01
The present invention provides a method for producing an anterior cruciate ligament ex vivo. The method comprises seeding pluripotent stem cells in a three dimensional matrix, anchoring the seeded matrix by attachment to two anchors, and culturing the cells within the matrix under conditions appropriate for cell growth and regeneration, while subjecting the matrix to one or more mechanical forces via movement of one or both of the attached anchors. Bone marrow stromal cells are preferably used as the pluripotent cells in the method. Suitable matrix materials are materials to which cells can adhere, such as a gel made from collagen type I. Suitable anchor materials are materials to which the matrix can attach, such as Goniopora coral and also demineralized bone. Optimally, the mechanical forces to which the matrix is subjected mimic mechanical stimuli experienced by an anterior cruciate ligament in vivo. This is accomplished by delivering the appropriate combination of tension, compression, torsion, and shear to the matrix. The bioengineered ligament which is produced by this method is characterized by a cellular orientation and/or matrix crimp pattern in the direction of the applied mechanical forces, and also by the production of collagen type I, collagen type III, and fibronectin proteins along the axis of mechanical load produced by the mechanical forces. Optimally, the ligament produced has fiber bundles which are arranged into a helical organization. The method for producing an anterior cruciate ligament can be adapted to produce a wide range of tissue types ex vivo by adapting the anchor size and attachment sites to reflect the size of the specific type of tissue to be produced, and also by adapting the specific combination of forces applied to mimic the mechanical stimuli experienced in vivo by the specific type of tissue to be produced. The methods of the present invention can be further modified to incorporate other stimuli experienced in vivo by the particular developing tissue, some examples being chemical stimuli and electromagnetic stimuli. Some examples of tissue which can be produced include other ligaments in the body (hand, wrist, elbow, knee), tendon, cartilage, bone, muscle, and blood vessels.
NASA Technical Reports Server (NTRS)
Jones, H. W.
1984-01-01
The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.
The local properties of ocean surface waves by the phase-time method
NASA Technical Reports Server (NTRS)
Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.
1992-01-01
A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
Multistage morphological segmentation of bright-field and fluorescent microscopy images
NASA Astrophysics Data System (ADS)
Korzyńska, A.; Iwanowski, M.
2012-06-01
This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour using a sequence of two types of microscopic images: bright-field images and/or fluorescent images. The method is based on two types of information: the cell texture coming from the bright-field images and the intensity of light emission produced by fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology supported by other image processing techniques. It detects cells in an image independently of their degree of flattening and of the presence of the structures that produce the texture, using synergistic information from the fluorescent emission image as support. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. In order to validate the method, two types of errors have been considered, using artificial images as the "gold standard": the error of cell area detection and the error of cell position.
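The sketch below is a generic morphological segmentation of a bright-field/fluorescence image pair in the spirit of the description above, not the authors' MSMA pipeline; the texture measure, Otsu thresholds, and structuring-element sizes are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology, measure

def segment_cells(bright_field, fluorescence, min_area=100):
    """Label cells using bright-field texture plus fluorescence intensity as support."""
    # Texture cue: local standard deviation of the bright-field image (slow but simple).
    texture = ndimage.generic_filter(bright_field.astype(float), np.std, size=7)
    mask = texture > filters.threshold_otsu(texture)
    # Supporting cue: thresholded fluorescence emission.
    mask |= fluorescence > filters.threshold_otsu(fluorescence)
    # Morphological clean-up: close small gaps and drop small spurious objects.
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    labels = measure.label(mask)
    return labels, measure.regionprops(labels)
```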
Structural dynamics and vibrations of damped, aircraft-type structures
NASA Technical Reports Server (NTRS)
Young, Maurice I.
1992-01-01
Engineering preliminary design methods for approximating and predicting the effects of viscous or equivalent viscous-type damping treatments on the free and forced vibration of lightly damped aircraft-type structures are developed. Similar developments are presented for dynamic hysteresis viscoelastic-type damping treatments. It is shown by both engineering analysis and numerical illustrations that the intermodal coupling of the undamped modes arising from the introduction of damping may be neglected in applying these preliminary design methods, except when dissimilar modes of these lightly damped, complex aircraft-type structures have identical or nearly identical natural frequencies. In such cases, it is shown that a relatively simple, additional interaction calculation between pairs of modes exhibiting this 'modal response' phenomenon suffices in the prediction of interacting modal damping fractions. The accuracy of the methods is shown to be very good to excellent, depending on the normal natural frequency separation of the system modes, thereby permitting a relatively simple preliminary design approach. This approach is shown to be a natural precursor to elaborate finite element, digital computer design computations in evaluating the type, quantity, and location of damping treatment.
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Analysing attitude data through ridit schemes.
El-rouby, M G
1994-12-02
The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole which defines the measured attitude. The applied scales may be of nominal, ordinal, interval, or ratio nature, depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally simpler than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
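A minimal sketch of the ridit computation itself, assuming ordered category counts for a reference group and a comparison group; the counts in the example are hypothetical.

```python
import numpy as np

def ridits(reference_counts):
    """Ridit of each ordered category, computed from the reference group."""
    p = np.asarray(reference_counts, float)
    p /= p.sum()
    below = np.concatenate([[0.0], np.cumsum(p)[:-1]])
    return below + 0.5 * p

def mean_ridit(comparison_counts, reference_counts):
    """Mean ridit of the comparison group; > 0.5 means it tends toward higher categories."""
    q = np.asarray(comparison_counts, float)
    return float(np.dot(q / q.sum(), ridits(reference_counts)))

# Hypothetical 5-point scale counts: comparison group vs. reference group.
print(mean_ridit([5, 10, 30, 35, 20], [10, 20, 40, 20, 10]))
```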
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
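For readers unfamiliar with the flavor of these stochastic approaches, the following is a minimal variational Monte Carlo sketch for the hydrogen atom, unrelated to the CASINO or NECI codes; the trial wavefunction, step size, and sampling length are illustrative choices.

```python
import numpy as np

def vmc_energy(alpha, n_steps=200_000, step=0.5, seed=0):
    """Metropolis sampling of |psi|^2 for psi = exp(-alpha*r); local energy in hartree."""
    rng = np.random.default_rng(seed)
    r = np.array([0.5, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = r + step * rng.uniform(-1.0, 1.0, 3)
        # Accept with probability |psi(trial)/psi(r)|^2 = exp(-2*alpha*(|trial| - |r|)).
        if rng.random() < np.exp(-2.0 * alpha * (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        if i > n_steps // 10:                       # discard equilibration steps
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(r))
    e = np.array(energies)
    return e.mean(), e.std(ddof=1) / np.sqrt(len(e))  # naive error bar (ignores correlation)

print(vmc_energy(1.0))   # alpha = 1 recovers the exact ground state energy, -0.5 hartree
```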
NASA Astrophysics Data System (ADS)
Pozderac, Preston; Leary, Cody
We investigated the solutions to the Helmholtz equation in the case of a spherically symmetric refractive index using three different methods. The first method involves solving the Helmholtz equation for a step index profile and applying further constraints contained in Maxwell's equations. Utilizing these equations, we can simultaneously solve for the electric and magnetic fields as well as the allowed energies of photons propagating in this system. The second method applies a perturbative correction to these energies, which surfaces when deriving a Helmholtz type equation in a medium with an inhomogeneous refractive index. Applying first order perturbation theory, we examine how the correction term affects the energy of the photon. In the third method, we investigate the effects of the above perturbation upon solutions to the scalar Helmholtz equation, which are separable with respect to its polarization and spatial degrees of freedom. This work provides insights into the vector field structure of a photon guided by a glass microsphere.
Southwest electronic one-stop shopping, motor carrier test report
DOT National Transportation Integrated Search
1997-12-22
The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...
Southwest electronic one-stop shopping, state agency test report
DOT National Transportation Integrated Search
1997-12-22
The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...
Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán
2017-04-24
We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type (as opposed to single-type) descriptors, we obtain more relevant features for machine learning. Following the principle of the "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r^2 = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
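A hedged sketch of the two-kernel construction described above (a Tanimoto kernel on binary fingerprints plus a linear kernel on non-binary descriptors) combined in kernel ridge regression; the equal kernel weighting and the regularization value are assumptions, and the published MultiDK model may differ in detail.

```python
import numpy as np

def tanimoto_kernel(A, B):
    """Tanimoto similarity between 0/1 fingerprint matrices (rows are molecules)."""
    dot = A @ B.T
    return dot / (A.sum(axis=1)[:, None] + B.sum(axis=1)[None, :] - dot)

def combined_kernel(Xb1, Xc1, Xb2, Xc2, w=0.5):
    """Weighted sum of a Tanimoto kernel (binary part) and a linear kernel (continuous part)."""
    return w * tanimoto_kernel(Xb1, Xb2) + (1.0 - w) * (Xc1 @ Xc2.T)

def fit_kernel_ridge(K_train, y, lam=1e-2):
    """Dual coefficients of kernel ridge regression."""
    return np.linalg.solve(K_train + lam * np.eye(len(y)), y)

# Hypothetical usage with binary fingerprints Xb_* and scaled continuous descriptors Xc_*:
# K = combined_kernel(Xb_tr, Xc_tr, Xb_tr, Xc_tr); coef = fit_kernel_ridge(K, y_tr)
# y_pred = combined_kernel(Xb_te, Xc_te, Xb_tr, Xc_tr) @ coef
```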
Influence of the implant abutment types and the dynamic loading on initial screw loosening
Kim, Eun-Sook
2013-01-01
PURPOSE This study examined the effects of abutment type and dynamic loading on the stability of implant prostheses with three types of implant abutments prepared using different fabrication methods, by measuring removal torque both before and after dynamic loading. MATERIALS AND METHODS Three groups of abutments were produced using different fabrication methods: stock abutment, gold cast abutment, and CAD/CAM custom abutment. A customized jig was fabricated to apply the load at 30° to the long axis. The implant fixtures were fixed to the jig and connected to the abutments with a 30 Ncm tightening torque. A sine-curved dynamic load was applied for 10⁵ cycles between 25 and 250 N at 14 Hz. Removal torque before loading and after loading was evaluated. SPSS was used for statistical analysis of the results. A Kruskal-Wallis test was performed to compare screw loosening between the abutment systems. A Wilcoxon signed-rank test was performed to compare screw loosening before and after loading in each group (α=0.05). RESULTS Removal torque value before loading and after loading was the highest in the stock abutment, followed by the gold cast abutment and the CAD/CAM custom abutment, but there were no significant differences. CONCLUSION The abutment types did not have a significant influence on short-term screw loosening. On the other hand, after 10⁵ cycles of dynamic loading, the CAD/CAM custom abutment affected the initial screw loosening, but the stock abutment and gold cast abutment did not. PMID:23509006
NASA Astrophysics Data System (ADS)
Song, Linze; Shi, Qiang
2017-02-01
We present a theoretical approach to study nonequilibrium quantum heat transport in molecular junctions described by a spin-boson type model. Based on the Feynman-Vernon path integral influence functional formalism, expressions for the average value and high-order moments of the heat current operators are derived, which are further obtained directly from the auxiliary density operators (ADOs) in the hierarchical equations of motion (HEOM) method. Distribution of the heat current is then derived from the high-order moments. As the HEOM method is nonperturbative and capable of treating non-Markovian system-environment interactions, the method can be applied to various problems of nonequilibrium quantum heat transport beyond the weak coupling regime.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type, with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data, the improvement relative to previously published methods without weighting is shown to be substantial.
Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method
NASA Astrophysics Data System (ADS)
Franco Grijalba, Freddy A.; Padovese, L. R.
2018-01-01
This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of different parameters, such as pickup coil type, scanning speed, applied magnetic field, and the frequency band analyzed, on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.
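One plausible reading of the moving second-order-moment window is a sliding RMS of the Barkhausen voltage along the scan, sketched below; the window length is an assumption.

```python
import numpy as np

def moving_rms(signal, window):
    """Sliding root-mean-square (second-order moment) of the Barkhausen voltage."""
    power = np.convolve(np.asarray(signal, float) ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(power)

# Hypothetical usage: profile = moving_rms(mbn_voltage, window=2048), plotted along the scan path.
```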
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. In order to avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, with a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that using random forest or bagging models produces larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding, and especially the calculation of the confidence intervals requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommendable. Third, the use of the delta method to calculate confidence intervals is not recommendable, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives, bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well and their confidence intervals are narrower.
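A hedged sketch of the random-forest variant with a bootstrap confidence interval, assuming per-trial estimated treatment effects on the surrogate (alpha) and on the true endpoint (beta) are already available; the hyperparameters and the cross-validated R^2 definition are illustrative, not the authors' exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def trial_level_r2(alpha, beta, n_trees=200):
    """Cross-validated R^2 of true-endpoint effects predicted from surrogate effects."""
    model = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    pred = cross_val_predict(model, alpha.reshape(-1, 1), beta, cv=min(5, len(beta)))
    ss_res = np.sum((beta - pred) ** 2)
    ss_tot = np.sum((beta - beta.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def bootstrap_ci(alpha, beta, n_boot=200, level=0.95, seed=0):
    """Percentile bootstrap over trials for the trial-level surrogacy measure."""
    rng = np.random.default_rng(seed)
    n = len(beta)
    stats = [trial_level_r2(alpha[idx], beta[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(stats, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi
```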
Non-Unitary Boson Mapping and Its Application to Nuclear Collective Motions
NASA Astrophysics Data System (ADS)
Takada, K.
First, the general theory of boson mapping for even-number many-fermion systems is surveyed. In order to overcome the confusion concerning the so-called unphysical or spurious states in the boson mapping, the correct concept of the unphysical states is given precisely and in a clear-cut way. Next, a method to apply the boson mapping to a truncated many-fermion Hilbert space consisting of collective phonons is proposed, putting special emphasis on the Dyson-type non-unitary boson mapping. On the basis of this method, it becomes possible for the first time to apply the Dyson-type boson mapping to analyses of collective motions in realistic nuclei. This method is also extended to be applicable to odd-number-fermion systems. As is well known, the Dyson-type boson mapping is a non-unitary transformation and it gives a non-Hermitian boson Hamiltonian. It is not easy (but not impossible) to solve for the eigenstates of the non-Hermitian Hamiltonian. A Hermitian treatment of this non-Hermitian eigenvalue problem is discussed, and it is shown that this treatment is a very good approximation. Using this Hermitian treatment, we can obtain the normal-ordered Holstein-Primakoff-type boson expansion in the multi-collective-phonon subspace, so that the convergence of the boson expansion can be tested. Some examples of application of the Dyson-type non-unitary boson mapping to simplified models and realistic nuclei are also shown, and we can see that it is quite useful for analysis of the collective motions in realistic nuclei. In contrast to the above-mentioned ordinary type of boson mapping, which may be called a "static" boson mapping, the Dyson-type non-unitary selfconsistent-collective-coordinate method is also discussed. The latter is, so to speak, a "dynamical" boson mapping, which is a dynamical extension of the ordinary boson mapping capable of including the coupling effects from the non-collective degrees of freedom selfconsistently. Thus, the Dyson-type non-unitary boson mapping from A to Z is summarized in this paper.
Tada, Atsuko; Ishizuki, Kyoko; Yamazaki, Takeshi; Sugimoto, Naoki; Akiyama, Hiroshi
2014-07-01
Natural ester-type gum bases, which are used worldwide as food additives, mainly consist of wax esters composed of long-chain fatty acids and long-chain fatty alcohols. There are many varieties of ester-type gum bases, and thus a useful method for their discrimination is needed in order to establish official specifications and manage their quality control. Herein is reported a rapid and simple method for the analysis of different ester-type gum bases used as food additives by high-temperature gas chromatography/mass spectrometry (GC/MS). With this method, the constituent wax esters in ester-type gum bases can be detected without hydrolysis and derivatization. The method was applied to the determination of 10 types of gum bases, including beeswax, carnauba wax, lanolin, and jojoba wax, and it was demonstrated that the gum bases derived from identical origins have specific and characteristic total ion chromatogram (TIC) patterns and ester compositions. Food additive gum bases were thus distinguished from one another based on their TIC patterns and then more clearly discriminated using simultaneous monitoring of the fragment ions corresponding to the fatty acid moieties of the individual molecular species of the wax esters. This direct high-temperature GC/MS method was shown to be very useful for the rapid and simple discrimination of varieties of ester-type gum bases used as food additives.
NASA Astrophysics Data System (ADS)
Kim, Yong-Sang; Ko, Sang-Jin; Lee, Sangkyu; Kim, Jung-Gu
2018-03-01
An interpretation of the relation between the electric field and the applied current for cathodic protection is investigated using a boundary element method simulation. In addition, a conductivity-difference environment is set up to study the interface influence. The variation of the potential distribution increases with increasing applied current and conductivity difference, due to the rejection of the current at the interface. In the case of the electric field, the trends of the increase rate with applied current are similar, but the interface influence differs according to the directional component and field type (a decrease of E_z and increases of E_x and E_y) because of the directional difference between the electric fields. Also, the trends of the electric field versus applied current plots are affected by the shape of the polarization curve for each polarization type (activation and concentration polarization in the oxygen-reduction and hydrogen-reduction reactions). This study shows that the underwater electric signature is determined by the polarization behavior of the materials.
ERIC Educational Resources Information Center
Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.
2018-01-01
Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.
1974-01-01
A general study of the stability of nonlinear as compared to linear control systems is presented. The analysis is general and, therefore, applies to other types of nonlinear biological control systems as well as the cardiovascular control system models. Both inherent and numerical stability are discussed for corresponding analytical and graphic methods and numerical methods.
49 CFR Appendix D to Part 173 - Test Methods for Dynamite (Explosive, Blasting, Type A)
Code of Federal Regulations, 2011 CFR
2011-10-01
... weighed to determine the percent of weight loss. 3. Test method D-3, Compression Exudation Test. The entire... from the glass tube and weighed to determine the percent of weight loss. ... assembly is placed under the compression rod, and compression is applied by means of the weight on the...
NASA Astrophysics Data System (ADS)
Jovanović, J.; Petronijević, R. B.; Lukić, M.; Karan, D.; Parunović, N.; Branković-Lazić, I.
2017-09-01
During the previous development of a chemometric method for estimating the amount of added colorant in meat products, it was noticed that the natural colorant most commonly added to boiled sausages, E 120, shows different CIE-LAB behavior compared to the artificial colors used for the same purpose. This opened the possibility of transforming the developed method into a method for identifying the addition of natural or synthetic colorants in boiled sausages based on measuring the color of the cross-section. After recalibration of the CIE-LAB method using linear discriminant analysis, verification was performed on 76 boiled sausages of either the frankfurter or Parisian sausage type. The accuracy and reliability of the classification were confirmed by comparison with the standard HPLC method. Results showed that the LDA + CIE-LAB method can be applied with high accuracy, 93.42 %, to estimate food color type in boiled sausages. Natural orange colors can give false positive results. Pigments from spice mixtures had no significant effect on CIE-LAB results.
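A minimal sketch of the classification step, assuming rows of CIE-LAB coordinates (L*, a*, b*) measured on cross-sections and a natural/synthetic label per sample; the cross-validation scheme is an assumption, not the authors' recalibration procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def colorant_classifier(X_lab, labels):
    """Fit LDA on rows of [L*, a*, b*] cross-section measurements; labels are
    'natural' (e.g. E 120) or 'synthetic'. Returns the fitted model and CV accuracy."""
    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, X_lab, labels, cv=5).mean()
    return lda.fit(X_lab, labels), accuracy

# Hypothetical usage: model, acc = colorant_classifier(X_lab, labels)
# model.predict(new_lab_measurements) then flags natural vs. synthetic colorant.
```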
NASA Astrophysics Data System (ADS)
Makó, Éva; Kovács, András; Ható, Zoltán; Kristóf, Tamás
2015-12-01
Recent experimental and simulation findings with kaolinite-methanol intercalation complexes raised the question of the existence of more stable structures in wet and dry state, which has not been fully cleared up yet. Experimental and molecular simulation analyses were used to investigate different types of kaolinite-methanol complexes, revealing their real structures. Cost-efficient homogenization methods were applied to synthesize the kaolinite-dimethyl sulfoxide and kaolinite-urea pre-intercalation complexes of the kaolinite-methanol ones. The tested homogenization method required an order of magnitude lower amount of reagents than the generally applied solution method. The influence of the type of pre-intercalated molecules and of the wetting or drying (at room temperature and at 150 °C) procedure on the intercalation was characterized experimentally by X-ray diffraction and thermal analysis. Consistent with the suggestion from the present simulations, 1.12-nm and 0.83-nm stable kaolinite-methanol complexes were identified. For these complexes, our molecular simulations predict either single-layered structures of mobile methanol/water molecules or non-intercalated structures of methoxy-functionalized kaolinite. We found that the methoxy-modified kaolinite can easily be intercalated by liquid methanol.
Treatment of cervical spine fractures with halo vest method in children and young people.
Tomaszewski, Ryszard; Pyzińska, Marta
2014-01-01
The Halo Vest method is a non-invasive treatment of cervical spine fractures. It is successfully applied in adults, which is supported by numerous studies, but has rarely been used among children and young people. There is little published research in this field. The aim of the paper is to present the effectiveness of Halo Vest external fixation in children and to evaluate the complication rate of this method. A retrospective study of 6 patients with cervical spine fractures with an average age of 13.3 years (range: 10 to 17 years) treated with Halo Vest external fixation between 2004 and 2013. The type and cause of fracture, treatment outcome and complications were evaluated. The average duration of follow-up was 55 months. In 5 cases, the treatment result was satisfactory. In one case, there were complications in the form of an external infection around the cranial pins. 1. The Halo Vest system can be applied as a non-operative method of treating cervical spine fractures in children and young people. 2. The criteria of eligibility for specific types of cervical spine fracture treatment in children and young people require further investigation, especially with regard to eliminating complications.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.
Juang, C F; Lin, J Y; Lin, C T
2000-01-01
An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as the consumed CPU time, is considerably reduced compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules) are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. In particular, for the TSK-type fuzzy rules used with the proposed learning algorithm, only the significant input variables are selected to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including a cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with some traditional GA-based fuzzy systems, the proposed SEFC is verified to be efficient and superior.
Li, Zhao-Liang
2018-01-01
Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results for a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method gives the best performance in terms of clustering accuracy. In order to validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these fuzzy validity indexes as applied to the FCM and IT2FCM methods is made. These four indexes are also applied to datasets of different spatial and spectral resolution to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes on the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
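For reference, the baseline type-I fuzzy c-means update used as a comparator in the paper can be sketched as follows; this is standard FCM, not the IT2FCM* algorithm, and the fuzzifier and tolerance are illustrative defaults.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: X is (n_pixels, n_bands); returns centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per pixel
    for _ in range(max_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster means
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)      # standard FCM membership update
        if np.max(np.abs(U_new - U)) < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical usage on a hyperspectral cube 'cube' of shape (rows, cols, bands):
# centers, U = fcm(cube.reshape(-1, cube.shape[-1]), n_clusters=6)
```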
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows a simplification of the classical ray-tracing procedure, which requires a great many rays to be simulated. The method presented simulates only a few rays; therefore it is extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are needed, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
Scaling-law equilibria for calcium in canopy-type models of the solar chromosphere
NASA Technical Reports Server (NTRS)
Jones, H. P.
1982-01-01
Scaling laws for resonance line formation are used to obtain approximate excitation and ionization equilibria for a three-level model of singly ionized calcium. The method has been developed for and is applied to the study of magnetograph response in the 8542 A infrared triplet line to magnetostatic canopies which schematically model diffuse, nearly horizontal fields in the low solar chromosphere. For this application, the method is shown to be efficient and semi-quantitative, and the results indicate the type and range of effects on calcium-line radiation which result from reduced gas pressure inside the magnetic regions.
NASA Astrophysics Data System (ADS)
Houben, Georg J.; Blümel, Martin
2017-11-01
Porosity is a fundamental parameter in hydrogeology. The empirical method of Beyer and Schweiger (1969) allows the calculation of hydraulic conductivity and both the total and effective porosity from granulometric data. However, due to its graphical nature with type curves, it is tedious to apply and prone to reading errors. In this work, the type curves were digitized and emulated by mathematical functions. The latter were implemented into a spreadsheet and a visual basic program, allowing the fast automated application of the method for any number of samples.
Estimating fire-caused mortality and injury in oak-hickory forests.
Robert M. Loomis
1973-01-01
Presents equations and graphs for predicting fire-caused tree mortality and equations for estimating basal wound dimensions for surviving trees. The methods apply to black oak, white oak, and some other species of the oak-hickory forest type.
Objective methods for developing indices of pilot workload.
DOT National Transportation Integrated Search
1977-07-01
This paper discusses the various types of objective methodologies that either have been or have the potential of being applied to the general problem of the measurement of pilot workload as it occurs on relatively short missions or mission phases. Se...
Sensory characteristics and consumer acceptability of decaffeinated green teas.
Lee, S M; Lee, H-S; Kim, K-H; Kim, K-O
2009-04-01
Green tea has been widely consumed for its mild flavors and its health benefits, yet the caffeine in green tea has been a limitation for those who want to avoid it. This limitation has increased the need for decaffeinated products in the green tea market. Most of the conventional decaffeination techniques applied in food use organic solvents. However, the supercritical carbon dioxide fluid extraction (SC-CO2) method is gaining attention as one of the future decaffeination methods that overcome the problems of conventional methods. The purpose of this study was to identify the sensory characteristics of decaffeinated green teas produced with the SC-CO2 method and to observe their relationship with consumer acceptability, in order to elucidate the potential of applying the SC-CO2 technique in the decaffeinated green tea market. Descriptive analysis was performed on 8 samples: green teas containing 4 caffeine levels (10%, 35%, 60%, and 100%) infused for 2 infusion periods (1 or 2 min). It was found that the SC-CO2 process not only reduced caffeine but also decreased some important features of the original tea flavors. Two groups were recruited for the consumer acceptability test: one (GP I, N = 52) consuming all types of green teas, including hot/cold canned teas, and the other (GP II, N = 40) consuming only the loose type. While GP II liked the original green tea the most, GP I liked highly decaffeinated green teas. Although the SC-CO2 method had limitations of losing complex flavors of green teas, it appeared to have future potential in the decaffeinated green tea market, with or without the addition of desirable flavors.
Barnett, Patrick D; Strange, K Alicia; Angel, S Michael
2017-06-01
This work describes a method of applying the Fourier transform to the two-dimensional Fizeau fringe patterns generated by the spatial heterodyne Raman spectrometer (SHRS), a dispersive interferometer, to correct the effects of certain types of optical alignment errors. In the SHRS, certain types of optical misalignments result in wavelength-dependent and wavelength-independent rotations of the fringe pattern on the detector. We describe here a simple correction technique that can be used in post-processing, by applying the Fourier transform in a row-by-row manner. This allows the user to be more forgiving of fringe alignment and allows for a reduction in the mechanical complexity of the SHRS.
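A minimal sketch of the row-by-row Fourier recovery, assuming a background-subtracted fringe image; averaging the per-row magnitude spectra is one simple way to combine rows when the fringes are rotated, and is not necessarily the authors' exact post-processing.

```python
import numpy as np

def rowwise_spectrum(fringe_image):
    """Recover the spectrum from an SHRS fringe image one detector row at a time."""
    rows = fringe_image - fringe_image.mean(axis=1, keepdims=True)  # remove per-row DC level
    return np.abs(np.fft.rfft(rows, axis=1)).mean(axis=0)           # average row magnitudes

# Hypothetical usage: recovered = rowwise_spectrum(img); the spatial-frequency axis maps to
# wavenumber through the SHRS dispersion for the chosen grating Littrow angle.
```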
Reliability verification of vehicle speed estimate method in forensic videos.
Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan
2018-06-01
In various types of traffic accidents, including car-to-car crashes, vehicle-pedestrian collisions, and hit-and-run accidents, excessive vehicle speed is one of the critical issues of traffic accident analysis. Hence, analysis of vehicle speed at the moment of the accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) that applies a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing the speeds obtained by applying the VSEM to videos of a test vehicle with the vehicle's global positioning system (GPS)-based Vbox speed. The VSEM verified by these procedures was applied to real traffic accident examples to evaluate its usability. Copyright © 2018 Elsevier B.V. All rights reserved.
Advances and future directions of research on spectral methods
NASA Technical Reports Server (NTRS)
Patera, A. T.
1986-01-01
Recent advances in spectral methods are briefly reviewed and characterized with respect to their convergence and computational complexity. Classical finite element and spectral approaches are then compared, and spectral element (or p-type finite element) approximations are introduced. The method is applied to the full Navier-Stokes equations, and examples are given of the application of the technique to several transitional flows. Future directions of research in the field are outlined.
Generation of medium frequency electrotherapeutic signals
NASA Astrophysics Data System (ADS)
Płaza, Mirosław; Szcześniak, Zbigniew; Dudek, Jolanta
2017-08-01
In this paper, generation methods for sinusoidal medium-frequency electrotherapeutic signals are studied. Signals of this type are increasingly used in electrotherapy owing to the development of both physical medicine and the engineering sciences. The article presents an analysis and comparison of analogue and digital methods of generating therapeutic signals. The analysis presented in the paper attempts to answer the question of which technique of medium-frequency signal generation can be most broadly applied in electrotherapy methods.
Limited-memory trust-region methods for sparse relaxation
NASA Astrophysics Data System (ADS)
Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.
2017-08-01
In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.
Empirical Flutter Prediction Method.
1988-03-05
been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this was done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of
Optical Fourier diffractometry applied to degraded bone structure recognition
NASA Astrophysics Data System (ADS)
Galas, Jacek; Godwod, Krzysztof; Szawdyn, Jacek; Sawicki, Andrzej
1993-09-01
Image processing and recognition methods are useful in many fields. This paper presents a hybrid optical and digital method applied to the recognition of pathological changes in bones caused by metabolic bone diseases. The trabecular bone structure, registered by X-ray on photographic film, is analyzed in a new type of computer-controlled diffractometer. The set of image parameters extracted from the diffractogram is evaluated by statistical analysis. Synthetic image descriptors in a discriminant space, constructed by discriminant analysis on the basis of 3 training groups of images (control, osteoporosis, and osteomalacia groups), allow us to recognize bone samples with degraded bone structure and to identify the disease. About 89% of the images were classified correctly. After an optimization process, this method will be verified in medical investigations.
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and in order to avoid any physical contact, identification of people while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two obtained after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
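A hedged sketch of the feature extraction and matching steps, assuming PyWavelets for the level-2 decomposition; which detail sub-bands to keep and the choice of mother wavelet are assumptions rather than the paper's exact configuration.

```python
import numpy as np
import pywt

def ear_features(image, wavelet="db4"):
    """Level-2 wavelet decomposition; keep the approximation and selected level-2 details."""
    cA2, (cH2, cV2, cD2), _ = pywt.wavedec2(image, wavelet, level=2)
    return np.concatenate([cA2.ravel(), cH2.ravel(), cV2.ravel()])

def identify(probe_features, gallery_features, gallery_ids):
    """Minimum Euclidean distance matching against an enrolled gallery."""
    d = [np.linalg.norm(probe_features - g) for g in gallery_features]
    return gallery_ids[int(np.argmin(d))]
```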
On controlling networks of limit-cycle oscillators
NASA Astrophysics Data System (ADS)
Skardal, Per Sebastian; Arenas, Alex
2016-09-01
The control of network-coupled nonlinear dynamical systems is an active area of research in the nonlinear science community. Coupled oscillator networks represent a particularly important family of nonlinear systems, with applications ranging from the power grid to cardiac excitation. Here, we study the control of network-coupled limit cycle oscillators, extending the previous work that focused on phase oscillators. Based on stabilizing a target fixed point, our method aims to attain complete frequency synchronization, i.e., consensus, by applying control to as few oscillators as possible. We develop two types of controls. The first type directs oscillators towards larger amplitudes, while the second does not. We present numerical examples of both control types and comment on the potential failures of the method.
NASA Astrophysics Data System (ADS)
Yamamoto, Kazuya; Takaoka, Toshimitsu; Fukui, Hidetoshi; Haruta, Yasuyuki; Yamashita, Tomoya; Kitagawa, Seiichiro
2016-03-01
In general, thin-film coating is widely applied to optical lens surfaces to provide an anti-reflection function. In the normal production process, the lens is first manufactured by molding, and the anti-reflection function is then added by thin-film coating. In recent years, instead of thin-film coating, sub-wavelength structures added to the surface of the molding die have been widely studied and developed to provide anti-reflection performance. By applying a sub-wavelength structure, the coating process becomes unnecessary and man-hour costs can be reduced. In addition to the cost merit, this approach has technical advantages: the adhesion of a coating depends on the plastic material, so it is not possible to apply an anti-reflection coating to an arbitrary surface, whereas sub-wavelength structures can solve both problems. Manufacturing methods for anti-reflection structures can be divided mainly into two types: one uses resist patterning, and the other is a mask-less method that does not require patterning. What we have developed is a new mask-less method which needs no resist patterning, can impart an anti-reflection structure to large-area and curved lens surfaces, and can be expected to find application in various market segments. We report the developed technique and the characteristics of production lenses.
Atomistic cluster alignment method for local order mining in liquids and glasses
NASA Astrophysics Data System (ADS)
Fang, X. W.; Wang, C. Z.; Yao, Y. X.; Ding, Z. J.; Ho, K. M.
2010-11-01
An atomistic cluster alignment method is developed to identify and characterize the local atomic structural order in liquids and glasses. With the “order mining” idea for structurally disordered systems, the method can detect the presence of any type of local order in the system and can quantify the structural similarity between a given set of templates and the aligned clusters in a systematic and unbiased manner. Moreover, population analysis can also be carried out for various types of clusters in the system. The advantages of the method in comparison with other previously developed analysis methods are illustrated by performing the structural analysis for four prototype systems (i.e., pure Al, pure Zr, Zr35Cu65 , and Zr36Ni64 ). The results show that the cluster alignment method can identify various types of short-range orders (SROs) in these systems correctly while some of these SROs are difficult to capture by most of the currently available analysis methods (e.g., Voronoi tessellation method). Such a full three-dimensional atomistic analysis method is generic and can be applied to describe the magnitude and nature of noncrystalline ordering in many disordered systems.
[Triple-type theory of statistics and its application in the scientific research of biomedicine].
Hu, Liang-ping; Liu, Hui-gang
2005-07-20
To point out the crux of why so many people fail to grasp statistics and to bring forth a "triple-type theory of statistics" to solve the problem in a creative way. Based on long experience in teaching and research in statistics, the "triple-type theory" was raised and clarified. Examples are provided to demonstrate that the three types, i.e., the expressive type, the prototype, and the standardized type, are essential for people to apply statistics rationally both in theory and in practice; moreover, it is demonstrated by instances that the three types are correlated with each other. The theory can help people to see the essence when interpreting and analyzing problems of experimental design and statistical analysis in medical research work. Investigations reveal that for some questions the three types are mutually identical; for some questions the prototype is also the standardized type; for some others, however, the three types are distinct from each other. It is shown that in some multifactor experimental studies no standardized type corresponding to the prototype exists at all, because the researchers have committed the mistake of "incomplete control" in setting up the experimental groups; this is a problem which should be solved by the concept and method of "division". Once the "triple type" for each question is clarified, a proper experimental design and statistical method can be chosen easily. The "triple-type theory of statistics" can help people to avoid committing statistical mistakes, or at least to decrease the misuse rate dramatically, and to improve the quality, level, and speed of biomedical research when applying statistics. It can also help to improve the quality of statistical textbooks and the effectiveness of statistics teaching, and it demonstrates how to advance biomedical statistics.
An Overview of the Evolution of Infrared Spectroscopy Applied to Bacterial Typing.
Quintelas, Cristina; Ferreira, Eugénio C; Lopes, João A; Sousa, Clara
2018-01-01
The sustained emergence of new declared bacterial species makes typing a continuous challenge for microbiologists. Molecular biology techniques have a very significant role in the context of bacterial typing, but they are often very laborious, time consuming, and eventually fail when dealing with very closely related species. Spectroscopic-based techniques appear in some situations as a viable alternative to molecular methods with advantages in terms of analysis time and cost. Infrared and mass spectrometry are among the most exploited techniques in this context: particularly, infrared spectroscopy emerged as a very promising method with multiple reported successful applications. This article presents a systematic review on infrared spectroscopy applications for bacterial typing, highlighting fundamental aspects of infrared spectroscopy, a detailed literature review (covering different taxonomic levels and bacterial species), advantages, and limitations of the technique over molecular biology methods and a comparison with other competing spectroscopic techniques such as MALDI-TOF MS, Raman, and intrinsic fluorescence. Infrared spectroscopy possesses a high potential for bacterial typing at distinct taxonomic levels and worthy of further developments and systematization. The development of databases appears fundamental toward the establishment of infrared spectroscopy as a viable method for bacterial typing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively smaller sample. In this paper, sample size estimation by a bootstrap procedure for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the same data is also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula was far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation at the outset, and to employ the same statistical method as will be used in the subsequent statistical analysis for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial is intended to extrapolate.
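A minimal sketch of the bootstrap power estimation described above, assuming a historical (pilot) sample is available for each arm; the resampling loop, the Wilcoxon rank-sum comparison, and the 0.80 power target are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_power(pilot_a, pilot_b, n_per_group, n_boot=2000, alpha=0.05, rng=None):
    """Estimate power for a given per-group sample size by resampling pilot data."""
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_boot):
        # Draw a hypothetical trial of size n_per_group from each pilot arm
        a = rng.choice(pilot_a, size=n_per_group, replace=True)
        b = rng.choice(pilot_b, size=n_per_group, replace=True)
        # Use the same nonparametric test planned for the final analysis
        _, p = mannwhitneyu(a, b, alternative="two-sided")
        rejections += (p < alpha)
    return rejections / n_boot

# Example: find the smallest n per group reaching roughly 80% power (illustrative data)
rng = np.random.default_rng(1)
pilot_a = rng.lognormal(mean=0.0, sigma=0.6, size=40)   # skewed, non-normal data
pilot_b = rng.lognormal(mean=0.3, sigma=0.6, size=40)
for n in range(20, 201, 10):
    if bootstrap_power(pilot_a, pilot_b, n, rng=2) >= 0.80:
        print("estimated sample size per group:", n)
        break
```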
Reverse-time migration for subsurface imaging using single- and multi- frequency components
NASA Astrophysics Data System (ADS)
Ha, J.; Kim, Y.; Kim, S.; Chung, W.; Shin, S.; Lee, D.
2017-12-01
Reverse-time migration is a seismic data processing method for obtaining accurate subsurface structure images from seismic data. This method has been applied to obtain more precise information on complex geological structures, including steep dips, by considering wave propagation characteristics based on two-way traveltime. Recently, various studies have reported the characteristics of datasets acquired from different types of media. In particular, because real subsurface media are composed of various types of structures, seismic data show various responses. Among them, frequency characteristics can be used as an important indicator for analyzing wave propagation in subsurface structures. All frequency components are utilized in conventional reverse-time migration, but analyzing each component is necessary because each contains inherent seismic response characteristics. In this study, we propose a reverse-time migration method that utilizes single- and multi-frequency components for subsurface imaging. We performed a spectral decomposition to utilize the characteristics of non-stationary seismic data. We propose two types of imaging conditions, in which decomposed signals are applied to complex and envelope traces. The SEG/EAGE Overthrust model was used to demonstrate the proposed method, and the first-derivative Gaussian function with a 10 Hz cutoff was used as the source signature. The results were more accurate and stable when relatively lower frequency components in the effective frequency range were used. By combining the gradients obtained from various frequency components, we confirmed that the results are clearer than those of the conventional method using all frequency components. Further study is required to effectively combine the multi-frequency components.
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods, encompassing many biological processes and simulation strategies, is now being developed, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Rzasnicki, W.
1973-01-01
A method of solution is presented which, when applied to the elasto-plastic analysis of plates having a v-notch on one edge and subjected to pure bending, produces stress and strain fields in much greater detail than presently available. Application of the boundary integral equation method results in two coupled Fredholm-type integral equations, subject to prescribed boundary conditions. These equations are replaced by a system of simultaneous algebraic equations and solved by a successive approximation method employing Prandtl-Reuss incremental plasticity relations. The method is first applied to a number of elasto-static problems and the results compared with available solutions. Good agreement is obtained in all cases. The elasto-plastic analysis provides detailed stress and strain distributions for several cases of plates with various notch angles and notch depths. A strain-hardening material is assumed and both plane strain and plane stress conditions are considered.
The stress analysis method for three-dimensional composite materials
NASA Astrophysics Data System (ADS)
Nagai, Kanehiro; Yokoyama, Atsushi; Maekawa, Zen'ichiro; Hamada, Hiroyuki
1994-05-01
This study proposes a stress analysis method for three-dimensionally fiber-reinforced composite materials. In this method, the rule of mixtures for composites is successfully applied to 3-D space in which material properties change three-dimensionally. The fundamental formulas for Young's modulus, shear modulus, and Poisson's ratio are derived. We also discuss a strength estimation and an optimum material design technique for 3-D composite materials. The analysis is performed for a triaxial orthogonally woven fabric, and the results are compared to experimental data in order to verify the accuracy of the method. The present methodology can be understood with basic material mechanics and elementary mathematics, so a computer program implementing the theory can be written without difficulty. Furthermore, the method can be applied to various types of 3-D composites because of its general-purpose character.
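For reference, the classical one-dimensional rule of mixtures that this kind of 3-D formulation generalizes (a standard textbook relation, not the paper's full derivation) can be written as

$$E_L = V_f E_f + V_m E_m, \qquad \frac{1}{E_T} = \frac{V_f}{E_f} + \frac{V_m}{E_m},$$

where $V_f$ and $V_m$ are the fiber and matrix volume fractions and $E_f$, $E_m$ the corresponding Young's moduli; the paper extends this averaging idea to properties that vary in all three spatial directions.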
Savareear, Benjamin; Brokl, Michał; Wright, Chris; Focant, Jean-Francois
2017-11-24
A thermal desorption comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (TD-GC×GC-TOFMS) method has been developed for the analysis of mainstream tobacco smoke (MTS) vapour phase (VP). The selection of the sample introduction approach involved comparing the results obtained from three different approaches: (a) a gas sampling bag followed by SPME (Tedlar®-SPME), (b) a gas sampling bag followed by TD (Tedlar®-TD), and (c) sampling directly onto TD sorbents (Direct-TD). Six different SPME fibers and six different TD sorbent beds were evaluated for their extraction capacities in terms of total number of peaks and related intensities or peak areas. The best results were obtained for the Direct-TD approach using Tenax TA/Carbograph 1TD/Carboxen 1003 sorbent tubes. The optimisation of TD tube desorption parameters was carried out using a face-centered central composite experimental design and resulted in the use of the Tenax TA/Carbograph 1TD/Carboxen 1003 sorbent with a 7.5 min desorption time, a 60 mL/min tube desorption flow, and a 250 °C tube desorption temperature. The optimised method was applied to the separation of MTS-VP constituents, with 665 analytes detected. The method precision ranged from 1% to 15% for over 99% of identified peak areas, and from 0% to 3% and 0% to 1% for the first (1tR) and second (2tR) dimension retention times, respectively. The method was applied to the analyses of two cigarette types differing in their filter construction. Principal component analysis (PCA) allowed a clear differentiation of the studied cigarette types (PC1 describing 94% of the explained variance). Supervised Fisher ratio analysis permitted the identification of compounds responsible for the chemical differences between the two sample types. A set of the 91 most relevant compounds was selected by applying a Fisher ratio cut-off approach, and most of them were selectively removed by one of the cigarette filter types. Copyright © 2017 Elsevier B.V. All rights reserved.
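A minimal sketch of the Fisher ratio screening step mentioned above, assuming a peak-area matrix with rows as smoke samples and columns as aligned peaks plus a class label per sample; the 91-feature cut-off is reproduced only as an illustrative parameter.

```python
import numpy as np

def fisher_ratios(X, y):
    """Between-class over within-class variance for each feature (column of X)."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

# Illustrative use: keep the highest-ranked peaks (e.g., the top 91 as in the study)
rng = np.random.default_rng(0)
X = rng.lognormal(size=(20, 665))          # 20 smoke samples x 665 detected peaks
y = np.array([0] * 10 + [1] * 10)          # two cigarette filter types
X[:10, :5] *= 3.0                          # a few peaks differ between the types
ranked = np.argsort(fisher_ratios(X, y))[::-1]
selected_peaks = ranked[:91]
print(selected_peaks[:10])
```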
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.
2016-05-03
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample.
Huang, Yong; Wang, Xin-Ling; Qiu, Heng; Xiao, Yi-Cheng; Wu, Zong-Hong; Xu, Jian
2018-02-01
Two types of articular cartilage defect models (model A and model B) were prepared using adult New Zealand white rabbits. The model A group was prepared by drilling without penetrating the subchondral bone; the right joint was repaired with composite scaffolds made of seed cells, gum-bletilla, and Pluronic F-127, and the left side served as blank control. The model B group was prepared by subchondral drilling; the right joint was repaired with composite scaffolds made of gum-bletilla and Pluronic F-127 without seed cells, and the left side served as blank control. Autogenous contrast was used in both model types. In addition, another group of model B rabbits was repaired with the artificial complex material Pluronic F-127 in both joints. At 4, 12 and 24 weeks after operation, the animals were sacrificed and samples were collected from the repaired area for staining with HE, type II collagen immunohistochemistry, Alcian blue, and toluidine blue, and then observed under an optical microscope. Semi-quantitative scores were graded according to Wakitani's histological scoring standard to assess the histomorphology of the repaired tissue. Hyaline cartilage repair was achieved in both group A and group B, with satisfactory results. There were no significant differences in the repair of articular cartilage defects between the composite scaffolds made of seed cells, gum-bletilla and Pluronic F-127 and the composite scaffolds made of gum-bletilla and Pluronic F-127 without seed cells. Better repair of articular cartilage defects was observed in the groups using gum-bletilla, indicating that gum-bletilla is a vital component of the composite scaffold material. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.
Duan, Ran; Fu, Haoda
2015-08-30
Recurrent event data are an important data type for medical research. In particular, many safety endpoints are recurrent outcomes, such as hypoglycemic events. For such a situation, it is important to identify the factors causing these events and rank these factors by their importance. Traditional model selection methods are not able to provide variable importance in this context. Methods that are able to evaluate the variable importance, such as gradient boosting and random forest algorithms, cannot directly be applied to recurrent events data. In this paper, we propose a two-step method that enables us to evaluate the variable importance for recurrent events data. We evaluated the performance of our proposed method by simulations and applied it to a data set from a diabetes study. Copyright © 2015 John Wiley & Sons, Ltd.
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-07-08
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
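A minimal sketch of a size-aware CNN classifier in the spirit of the approach above, using Keras; the input resolution, the way the size information is concatenated, the layer sizes, and the 62-class output are illustrative assumptions rather than the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 62  # six countries, 62 denominations (illustrative)

# Image branch: grayscale line-sensor banknote image
image_in = layers.Input(shape=(64, 128, 1), name="banknote_image")
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Size branch: measured width/height of the note, normalised
size_in = layers.Input(shape=(2,), name="banknote_size")

# Combine visual features with the size information before classification
merged = layers.concatenate([x, size_in])
merged = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = tf.keras.Model(inputs=[image_in, size_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```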
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
Using a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is highly efficient and has better computational performance.
Co-amplification at lower denaturation temperature-PCR: methodology and applications.
Liang, Hui; Chen, Guo-Jie; Yu, Yan; Xiong, Li-Kuan
2018-03-20
Co-amplification at lower denaturation temperature-polymerase chain reaction (COLD-PCR) is a novel form of PCR that selectively denatures and amplifies low-abundance mutations from mixtures of wild-type and mutation-containing sequences, enriching the mutation 10- to 100-fold. Owing to the slightly altered melting temperature (Tm) of the double-stranded DNA and the formation of mutation/wild-type heteroduplex DNA, COLD-PCR methods are sensitive, specific, accurate, cost-effective, and easy to perform, and can enrich mutations of any type and at any position, even unknown mutations within amplicons. COLD-PCR and its improved variants are now applied in cancer, microorganisms, prenatal screening, animals, and plants. They are extremely useful for early diagnosis, monitoring the prognosis of disease and the efficacy of treatment, drug selection, prediction of prognosis, plant breeding, etc. In this review, we introduce the principles, key techniques, derived methods, and applications of COLD-PCR.
Kim, Keun Ho; Ku, Boncho; Kang, Namsik; Kim, Young-Su; Jang, Jun-Su; Kim, Jong Yeol
2012-01-01
The voice has been used to classify the four constitution types, and to recognize a subject's health condition by extracting meaningful physical quantities, in traditional Korean medicine. In this paper, we propose a method of selecting the reliable variables from various voice features, such as frequency derivative features, frequency band ratios, and intensity, from vowels and a sentence. Further, we suggest a process to extract independent variables by eliminating explanatory variables and reducing their correlation and remove outlying data to enable reliable discriminant analysis. Moreover, the suitable division of data for analysis, according to the gender and age of subjects, is discussed. Finally, the vocal features are applied to a discriminant analysis to classify each constitution type. This method of voice classification can be widely used in the u-Healthcare system of personalized medicine and for improving diagnostic accuracy. PMID:22529874
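A minimal sketch of the final discriminant step described above, assuming a table of per-subject voice features (pitch derivatives, band ratios, intensity) and a constitution label; the feature dimensions and the use of scikit-learn's linear discriminant analysis are assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Illustrative data: 200 subjects x 12 voice features, 4 constitution types
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)

# Standardize the features, then fit a linear discriminant classifier
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```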
Reverberation Modelling Using a Parabolic Equation Method
2012-10-01
the limits of their applicability. Results: Transmission loss estimates produced by the PECan parabolic equation acoustic model were used in ... environments is possible when used in concert with a parabolic equation passive acoustic model. Future plans: The authors of this report recommend further ... technique using other types of acoustic models should be undertaken. Furthermore, as the current method when applied as-is results in estimates that reflect
Nattrass, C; Ireland, A J; Sherriff, M
1997-05-01
This in vitro investigation was designed to establish not only how clinicians apply forces for space closure when using the straight wire appliance and sliding mechanics, but also to quantify the initial force levels produced. A single typodont, with residual extraction space in each quadrant, was set up to simulate space closure using sliding mechanics. On two occasions, at least 2 months apart, 18 clinicians were asked to apply three force delivery systems to the typodont, in the manner in which they would apply it in a clinical situation. The three types of force delivery system investigated were elastomeric chain, an elastomeric module on a steel ligature, and a nickel-titanium closed coil spring. A choice of spaced or unspaced elastomeric chain produced by a single manufacturer was provided. The amount of stretch which was placed on each type of system was measured and, using an Instron Universal Testing Machine, the initial force which would be generated by each force delivery system was established. Clinicians were assessed to examine their consistency in the amount of stretch which each placed on the force delivery systems, their initial force application and their ability to apply equivalent forces with the different types of force delivery system. The clinicians were found to be consistent in their method of application of the force delivery systems and, therefore, their force application, as individuals, but there was a wide range of forces applied as a group. However, most clinicians applied very different forces when using different force delivery systems. When using the module on a ligature the greatest force was applied, whilst the nickel titanium coil springs provided the least force.
Individualisation of Lean Concept in Companies Dealing with Mass Production
NASA Astrophysics Data System (ADS)
Bednár, Roman
2012-12-01
The methods of lean manufacturing, primarily designed for businesses dealing with serial production, are also used in other types of production. However, because the concept of lean production was not designed for these types of businesses, the methods are utilized only partially. The paper focuses on applying methods of the lean concept in companies dealing with mass production and on the options for exchanging them for other methods in case of a mismatch. The basis of the article is a list of lean methods with their descriptions and their utilization in practice. A questionnaire was used to gather information from practice. Based on this survey, the critical methods that are no longer appropriate for companies dealing with mass production were identified; however, alternative methods addressing the same problems are described. It can be said that companies try to get closer to their goal by modifying the basic concepts, and the concept of the Lean Enterprise serves as a standard.
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed-point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of substituting for the nonlinear equation a minimization problem in an H⁻¹-type Sobolev space.
Predicting Protein Function by Genomic Context: Quantitative Evaluation and Qualitative Inferences
Huynen, Martijn; Snel, Berend; Lathe, Warren; Bork, Peer
2000-01-01
Various new methods have been proposed to predict functional interactions between proteins based on the genomic context of their genes. The types of genomic context that they use are Type I: the fusion of genes; Type II: the conservation of gene-order or co-occurrence of genes in potential operons; and Type III: the co-occurrence of genes across genomes (phylogenetic profiles). Here we compare these types for their coverage, their correlations with various types of functional interaction, and their overlap with homology-based function assignment. We apply the methods to Mycoplasma genitalium, the standard benchmarking genome in computational and experimental genomics. Quantitatively, conservation of gene order is the technique with the highest coverage, applying to 37% of the genes. By combining gene order conservation with gene fusion (6%), the co-occurrence of genes in operons in absence of gene order conservation (8%), and the co-occurrence of genes across genomes (11%), significant context information can be obtained for 50% of the genes (the categories overlap). Qualitatively, we observe that the functional interactions between genes are stronger as the requirements for physical neighborhood on the genome are more stringent, while the fraction of potential false positives decreases. Moreover, only in cases in which gene order is conserved in a substantial fraction of the genomes, in this case six out of twenty-five, does a single type of functional interaction (physical interaction) clearly dominate (>80%). In other cases, complementary function information from homology searches, which is available for most of the genes with significant genomic context, is essential to predict the type of interaction. Using a combination of genomic context and homology searches, new functional features can be predicted for 10% of M. genitalium genes. PMID:10958638
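A minimal sketch of the phylogenetic-profile idea (Type III genomic context) discussed above: genes with identical or very similar patterns of presence and absence across genomes are predicted to interact functionally. The toy profiles and the Hamming-distance cut-off are illustrative assumptions.

```python
import numpy as np

# Rows: genes; columns: genomes; 1 = an ortholog is present in that genome
profiles = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],   # gene A
    [1, 1, 0, 1, 0, 1, 1, 0],   # gene B: identical profile -> predicted partner of A
    [0, 1, 1, 0, 1, 0, 0, 1],   # gene C
])
genes = ["A", "B", "C"]

def hamming(p, q):
    """Number of genomes in which two presence/absence profiles disagree."""
    return int(np.sum(p != q))

# Predict a functional link when profiles differ in at most one genome (toy cut-off)
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        d = hamming(profiles[i], profiles[j])
        if d <= 1:
            print(f"predicted functional interaction: {genes[i]} - {genes[j]} (distance {d})")
```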
1981-01-01
instruction and type of task, and method of instruction and ability level were found with respect to school ... student. Methods of instruction that reduce the intellectual demand often reduce the differences between high and low ability students (Cronbach and ... Snow, 1977). If these methods are applied to instruction of low ability students over a long period, many low ability students may equal or excel
A Design Method for a State Feedback Microcomputer Controller of a Wide Bandwidth Analog Plant.
1983-12-01
Naval Postgraduate School, Monterey, California (thesis): A Design Method for a State Feedback Microcomputer Controller of a Wide Bandwidth ... of a microcomputer regulator, a continuous or discrete method can be applied. The objective of this thesis is to provide a continuous controller ... estimation and control type problem. In this thesis, a wide bandwidth analog computer system is chosen as the plant so that the effect of transport
Nonstatic radiating spheres in general relativity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krori, K.D.; Borgohain, P.; Sarma, R.
1985-02-15
The method of Herrera, Jimenez, and Ruggeri of obtaining nonstatic solutions of Einstein's field equations to study the evolution of stellar bodies is applied to obtain two models of nonstatic radiating spheres from two well-known static solutions of field equations, viz., Tolman's solutions IV and V. Whereas Tolman's type-IV model is found to be contracting for the period under investigation, Tolman's type-V model shows a bounce after attaining a minimum radius.
Chen, Hsin-Chang; Ding, Wang-Hsien
2006-03-10
A comprehensive method for the determination of four stilbene-type disulfonate and one distyrylbiphenyl-type fluorescent whitening agents (FWAs) in paper materials (napkin and paper tissue) and infant clothes was developed. FWAs were extracted from paper material and cloth samples using hot-water extraction, and the aqueous extracts were then preconcentrated with the newly developed Oasis WAX (mixed-mode weak anion exchange and reversed-phase sorbent) solid-phase extraction cartridge. The analytes were unequivocally determined by ion pair chromatography coupled with negative electrospray ionization tandem mass spectrometry (HPLC-ESI-MS-MS), applying di-n-hexylammonium acetate (DHAA) as the ion-pairing reagent in the mobile phase. Limits of quantitation (LOQ) were between 0.2 and 0.9 ng/g in 2 g of sample. Recoveries of the five FWAs in spiked commercial samples were between 42 and 95%, with RSDs (n = 3) ranging from 2 to 11%. The method was finally applied to commercial samples, showing that two stilbene-type disulfonates were the predominant FWAs detected in napkin and infant cloth samples.
Monitoring Traffic Information with a Developed Acceleration Sensing Node.
Ye, Zhoujing; Wang, Linbing; Xu, Wen; Gao, Zhifei; Yan, Guannan
2017-12-05
In this paper, an acceleration sensing node for pavement vibration was developed to monitor traffic information, including vehicle speed, vehicle types, and traffic flow, where a hardware design with low energy consumption and node encapsulation could be accomplished. The service performance of the sensing node was evaluated, by methods including waterproof test, compression test, sensing performance analysis, and comparison test. The results demonstrate that the sensing node is low in energy consumption, high in strength, IPX8 waterproof, and high in sensitivity and resolution. These characteristics can be applied to practical road environments. Two sensing nodes were spaced apart in the direction of travelling. In the experiment, three types of vehicles passed by the monitoring points at several different speeds and values of d (the distance between the sensor and the nearest tire center line). Based on cross-correlation with kernel pre-smoothing, a calculation method was applied to process the raw data. New algorithms for traffic flow, speed, and axle length were proposed. Finally, the effects of vehicle speed, vehicle weight, and d value on acceleration amplitude were statistically evaluated. It was found that the acceleration sensing node can be used for traffic flow, vehicle speed, and other types of monitoring.
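A minimal sketch of the speed estimate implied by the two spaced sensing nodes above: smooth both acceleration signals, cross-correlate them to find the travel-time lag, and divide the node spacing by that lag. The sampling rate, smoothing kernel, and node spacing are illustrative assumptions, not the authors' published values.

```python
import numpy as np

def estimate_speed(acc_upstream, acc_downstream, fs_hz, spacing_m, kernel_len=25):
    """Vehicle speed from two pavement acceleration signals via cross-correlation."""
    kernel = np.hanning(kernel_len)
    kernel /= kernel.sum()
    a = np.convolve(acc_upstream, kernel, mode="same")    # kernel pre-smoothing
    b = np.convolve(acc_downstream, kernel, mode="same")
    a = a - a.mean()
    b = b - b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(xcorr) - (len(a) - 1)         # delay of b relative to a
    lag_s = lag_samples / fs_hz
    return spacing_m / lag_s if lag_s > 0 else float("nan")

# Illustrative use: a synthetic pulse passing the second node 0.2 s later
fs, spacing = 500, 2.0                                    # 500 Hz sampling, nodes 2 m apart
t = np.arange(0, 2, 1 / fs)
pulse = np.exp(-((t - 0.5) ** 2) / 0.001)
delayed = np.exp(-((t - 0.7) ** 2) / 0.001)
print("estimated speed (m/s):", estimate_speed(pulse, delayed, fs, spacing))
```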
Graph Structured Program Evolution: Evolution of Loop Structures
NASA Astrophysics Data System (ADS)
Shirakawa, Shinichi; Nagao, Tomoharu
Recently, numerous automatic programming techniques have been developed and applied in various fields. A typical example is genetic programming (GP), and various extensions and representations of GP have been proposed thus far. Complex programs and hand-written programs, however, may contain several loops and handle multiple data types. In this chapter, we propose a new method called Graph Structured Program Evolution (GRAPE). The representation of GRAPE is a graph structure; therefore, it can represent branches and loops using this structure. Each program is constructed as an arbitrary directed graph of nodes and a data set. The GRAPE program handles multiple data types using the data set for each type, and the genotype of GRAPE takes the form of a linear string of integers. We apply GRAPE to three test problems, factorial, exponentiation, and list sorting, and demonstrate that the optimum solution in each problem is obtained by the GRAPE system.
Shock wave refraction enhancing conditions on an extended interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markhotok, A.; Popovic, S.
2013-04-15
We determined the law of shock wave refraction for a class of extended interfaces with continuously variable gradients. When the interface is extended or when the gas parameters vary fast enough, the interface cannot be considered as sharp or smooth and the existing calculation methods cannot be applied. The expressions we derived are general enough to cover all three types of the interface and are valid for any law of continuously varying parameters. We apply the equations to the case of exponentially increasing temperature on the boundary and compare the results for all three types of interfaces. We have demonstrated that the type of interface can increase or inhibit the shock wave refraction. Our findings can be helpful in understanding the results obtained in energy deposition experiments as well as for controlling the shock-plasma interaction in other settings.
Treating Depression during Pregnancy and the Postpartum: A Preliminary Meta-Analysis
ERIC Educational Resources Information Center
Bledsoe, Sarah E.; Grote, Nancy K.
2006-01-01
Objectives: This meta-analysis evaluates treatment effects for nonpsychotic major depression during pregnancy and postpartum comparing interventions by type and timing. Methods: Studies for decreasing depressive severity during pregnancy and postpartum applying treatment trials and standardized measures were included. Standardized mean differences…
NASA Astrophysics Data System (ADS)
Shvelidze, T. D.; Malyuto, V. D.
Quantitative spectral classification of F, G and K stars has been performed with the 70-cm telescope of the Abastumani Astrophysical Observatory in areas of the main meridional section of the Galaxy for which proper motion data are available. Fundamental parameters have been obtained for 333 stars in four areas. Space densities of stars of different spectral types, the stellar luminosity function and the relationships between the kinematics and metallicity of stars have been studied. The results have confirmed and complemented the conclusions drawn from some previous spectroscopic and photometric surveys. Many plates have been obtained for other important directions in the sky: the Kapteyn areas, the Galactic anticentre and the main meridional section of the Galaxy. These data can be treated with the same quantitative method applied here. This method may also be applied to other available and future spectroscopic data of similar resolution, notably those obtained with large-format CCD detectors on Schmidt-type telescopes.
Zhou, Wei; Lyu, Teng Fei; Yang, Zhi Ping; Sun, Hong; Yang, Liang Jie; Chen, Yong; Ren, Wan Jun
2016-09-01
Unreasonable application of nitrogen fertilizer to cropland decreases the nitrogen use efficiency of crops. A large amount of nitrogen is lost to the environment through runoff, leaching, ammonia volatilization, nitrification-denitrification, etc., causing water and atmospheric pollution, posing serious environmental problems and threatening human health. The type of nitrogen fertilizer and its application rate, time, and method have significant effects on nitrogen loss. The primary cause of nitrogen loss is a supersaturated soil nitrogen concentration. Making full use of environmental nitrogen sources, reducing the application rate of chemical nitrogen fertilizers, applying deep placement fertilizing methods, and applying organic fertilizers together with chemical nitrogen fertilizers are effective practices for reducing nitrogen loss and improving nitrogen use efficiency. It is suggested that developing new high-efficiency nitrogen fertilizers, enhancing nitrogen management, and strengthening the monitoring and use of environmental nitrogen sources are powerful tools to decrease the nitrogen application rate and increase efficiency on cropland.
Vincent, Jordan; Wang, Hui; Nibouche, Omar; Maguire, Paul
2018-05-25
Food fraud, the sale of goods that have in some way been mislabelled or tampered with, is an increasing concern, with a number of high-profile documented incidents in recent years. These incidents and their scope show that there are gaps in the food chain where food authentication methods are not applied or are otherwise not sufficient, and that more accessible detection methods would be beneficial. This paper investigates the utility of affordable and portable visible-range spectroscopy hardware with partial least squares discriminant analysis (PLS-DA) applied to the differentiation of apple types and organic status. This approach has the advantage of being accessible throughout the supply chain, including at the consumer level. Scans were acquired of 132 apples of three types, half of which were organic and the remainder non-organic. The scans were preprocessed with zero correction, normalisation and smoothing. Two tests were used to determine accuracy, the first using 10-fold cross-validation and the second using a test set collected in different ambient conditions. Overall, the system achieved an accuracy of 94% when predicting the type of apple and 66% when predicting the organic status. Additionally, the resulting models were analysed to find the regions of the spectrum with the most significance. The accuracy when using only three-channel information (RGB) is then presented, showing the improvement provided by the spectroscopic data.
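A minimal PLS-DA sketch in the spirit of the above, using scikit-learn's PLSRegression on one-hot class labels followed by an argmax decision; the number of components, the preprocessing, and the synthetic spectra are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apples, n_wavelengths, n_types = 132, 200, 3

# Illustrative spectra: each apple type gets a slightly shifted baseline shape
y = rng.integers(0, n_types, size=n_apples)
base = np.sin(np.linspace(0, np.pi, n_wavelengths))
X = base + 0.05 * y[:, None] + 0.02 * rng.normal(size=(n_apples, n_wavelengths))
X = X - X.mean(axis=1, keepdims=True)            # simple per-spectrum normalisation

# PLS-DA: regress one-hot labels on spectra, then classify by the largest response
Y = np.eye(n_types)[y]
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, random_state=1)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)
print("hold-out accuracy:", (pred == y_te).mean())
```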
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, D.
2017-12-01
Emergency responses to floods require timely information on water extents, which can be produced by satellite-based remote sensing. Because SAR images can be acquired under adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water from dry land, without considering the complexity and variability of the different dry land surface types in an image. This paper proposes a new thresholding method for SAR images to delineate water from other land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is taken as the threshold to separate the two. To extract water, a set of thresholds is applied to several pairs of land cover types, such as water and urban or water and forest, and the resulting subsets are merged to form the water extent for the SAR image during or after the flooding. Experiments show that this land-cover-based thresholding approach outperformed traditional single thresholding by about 5% to 15%. The method has great application potential given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
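A minimal sketch of the per-class threshold described above, assuming Gaussian fits to backscatter intensity (in dB) for water and for each dry land cover class; the class statistics are illustrative, and the intersection is found numerically rather than in closed form.

```python
import numpy as np
from scipy.stats import norm

def class_threshold(samples_water, samples_land):
    """Backscatter value where the two fitted Gaussian densities intersect."""
    mu_w, sd_w = samples_water.mean(), samples_water.std()
    mu_l, sd_l = samples_land.mean(), samples_land.std()
    grid = np.linspace(min(mu_w, mu_l), max(mu_w, mu_l), 10000)
    diff = norm.pdf(grid, mu_w, sd_w) - norm.pdf(grid, mu_l, sd_l)
    return grid[np.argmin(np.abs(diff))]          # crossing point between the means

# Illustrative pre-flood training samples (backscatter in dB)
rng = np.random.default_rng(0)
water = rng.normal(-18.0, 1.5, 5000)
forest = rng.normal(-8.0, 2.0, 5000)
urban = rng.normal(-2.0, 2.5, 5000)

# Per-pixel decision using the pre-flood land cover of each pixel, then merge subsets
landcover = rng.choice([1, 2], size=(100, 100))               # 1 = forest, 2 = urban
thresholds = {1: class_threshold(water, forest), 2: class_threshold(water, urban)}
flood_image = rng.normal(-12.0, 6.0, (100, 100))              # SAR scene during flood
water_mask = np.zeros_like(flood_image, dtype=bool)
for cls, thr in thresholds.items():
    water_mask |= (landcover == cls) & (flood_image < thr)    # merge per-class subsets
print("water fraction:", water_mask.mean())
```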
Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).
Suzutani, T; Ishibashi, H; Takatori, T
1978-11-01
The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.
NASA Astrophysics Data System (ADS)
Liu, Zhigang; Song, Wenguang; Kochan, Orest; Mykyichuk, Mykola; Jun, Su
2017-07-01
A method for the theoretical analysis of the temperature ranges in which the error due to acquired thermoelectric inhomogeneity of thermocouple legs manifests most strongly is proposed in this paper. The drift of the reference function of type K thermocouples in ceramic insulation, consisting of 1.2 mm diameter thermoelements, after exposure to 800 °C for 10 000 h in an oxidizing atmosphere (air), is analyzed. The method takes into account various operating conditions to determine the optimal conditions for studying inhomogeneous thermocouples. The method can be applied to other types of thermocouples, taking into account their specific characteristics and the conditions to which they have been exposed.
Prokopenko, L V; Afanas'eva, R F; Bessonova, N A; Burmistrova, O V; Losik, T K; Konstantinov, E I
2013-01-01
Studies of the thermal state of humans engaged in physical work in a heating environment while wearing various types of protective clothing demonstrated the role of the protective clothing in modifying the thermal load on the body, and the possibility of decreasing this load through correction of air temperature and humidity and through shorter stays at the workplace. The authors present hygienic requirements for the air temperature range in accordance with the allowable degree of body heating, and suggest a mathematical model to forecast an integral parameter of human functional state in accordance with the type of protective clothing applied. The article also covers the necessity of an upper air temperature limit during the hot season when protective clothing made of materials with low air permeability and hydraulic conductivity is worn.
Probabilistic Exposure Analysis for Chemical Risk Characterization
Bogen, Kenneth T.; Cullen, Alison C.; Frey, H. Christopher; Price, Paul S.
2009-01-01
This paper summarizes the state of the science of probabilistic exposure assessment (PEA) as applied to chemical risk characterization. Current probabilistic risk analysis methods applied to PEA are reviewed. PEA within the context of risk-based decision making is discussed, including probabilistic treatment of related uncertainty, interindividual heterogeneity, and other sources of variability. Key examples of recent experience gained in assessing human exposures to chemicals in the environment, and other applications to chemical risk characterization and assessment, are presented. It is concluded that, although improvements continue to be made, existing methods suffice for effective application of PEA to support quantitative analyses of the risk of chemically induced toxicity that play an increasing role in key decision-making objectives involving health protection, triage, civil justice, and criminal justice. Different types of information required to apply PEA to these different decision contexts are identified, and specific PEA methods are highlighted that are best suited to exposure assessment in these separate contexts. PMID:19223660
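A minimal Monte Carlo sketch of the kind of probabilistic exposure calculation discussed above, propagating variability in concentration, intake rate, and body weight into a distribution of dose; the distributions and parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                     # simulated individuals

# Illustrative variability distributions for an ingestion-exposure dose equation:
# dose (mg/kg-day) = concentration * intake / body_weight
concentration = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n)   # mg/L in water
intake = rng.lognormal(mean=np.log(1.8), sigma=0.3, size=n)          # L/day
body_weight = rng.normal(70.0, 12.0, size=n).clip(min=30.0)          # kg

dose = concentration * intake / body_weight
print("median dose (mg/kg-day):", round(np.median(dose), 4))
print("95th percentile dose:", round(np.percentile(dose, 95), 4))
```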
Fleming, R. M.; Seager, C. H.; Lang, D. V.; ...
2015-07-02
In this study, an improved method for measuring the cross sections for carrier trapping at defects in semiconductors is described. This method, a variation of deep level transient spectroscopy (DLTS) used with bipolar transistors, is applied to hot carrier trapping at vacancy-oxygen, carbon-oxygen, and three charge states of divacancy centers (V2) in n- and p-type silicon. Unlike standard DLTS, we fill traps by injecting carriers into the depletion region of a bipolar transistor diode using a pulse of forward bias current applied to the adjacent diode. We show that this technique is capable of accurately measuring a wide range of capture cross sections at varying electric fields due to the control of the carrier density it provides. Because this technique can be applied to a variety of carrier energy distributions, it should be valuable in modeling the effect of radiation-induced generation-recombination currents in bipolar devices.
NASA Astrophysics Data System (ADS)
Diller, Christian; Karic, Sarah; Oberding, Sarah
2017-06-01
The topic of this article is the question of the phases of the political planning process in which planners apply their methodological set of tools. For that purpose, the results of a research project are presented, which were obtained by examining planning cases reported in scholarly journals. First, it is discussed which model of the planning process is most suitable for reflecting the cases considered and how it relates to models of the political process. Thereafter, it is analyzed which types of planning methods are applied in the several stages of the planning process. The central findings: although complex, many planning processes can be thoroughly represented by a linear model with predominantly simple feedback loops. Even in times of the communicative turn, planners should take care to apply not only communicative methods but also the classical analytical-rational methods in their toolset; the latter are especially helpful for understanding the political process before and after the actual planning phase.
BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliadis, C.; Anderson, K. S.; Coc, A.
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
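A minimal sketch of a Bayesian fit in the spirit described above: a quadratic S-factor model with a Gaussian likelihood and broad priors, sampled with a plain Metropolis random walk. The model form, data, priors, and step sizes are illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "measured" S-factor data: S(E) = a0 + a1*E + a2*E^2 plus noise
E = np.linspace(0.01, 0.5, 25)                       # MeV
true = np.array([0.20, 0.10, 0.50])
S_obs = true[0] + true[1] * E + true[2] * E**2 + rng.normal(0, 0.01, E.size)
sigma = 0.01

def log_post(theta):
    """Gaussian likelihood with a broad flat prior on the coefficients."""
    if np.any(theta < 0) or np.any(theta > 10):
        return -np.inf
    model = theta[0] + theta[1] * E + theta[2] * E**2
    return -0.5 * np.sum(((S_obs - model) / sigma) ** 2)

# Plain Metropolis random-walk sampler
theta = np.array([0.1, 0.1, 0.1])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.01, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                   # discard burn-in
print("posterior mean of S(0) [a0]:", samples[:, 0].mean())
```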
Petegrosso, Raphael; Tolar, Jakub
2018-01-01
Single-cell RNA sequencing (scRNA-seq) has been widely applied to discover new cell types by detecting sub-populations in a heterogeneous group of cells. Since scRNA-seq experiments have lower read coverage/tag counts and introduce more technical biases compared to bulk RNA-seq experiments, the limited number of sampled cells combined with the experimental biases and other dataset specific variations presents a challenge to cross-dataset analysis and discovery of relevant biological variations across multiple cell populations. In this paper, we introduce a method of variance-driven multitask clustering of single-cell RNA-seq data (scVDMC) that utilizes multiple single-cell populations from biological replicates or different samples. scVDMC clusters single cells in multiple scRNA-seq experiments of similar cell types and markers but varying expression patterns such that the scRNA-seq data are better integrated than typical pooled analyses which only increase the sample size. By controlling the variance among the cell clusters within each dataset and across all the datasets, scVDMC detects cell sub-populations in each individual experiment with shared cell-type markers but varying cluster centers among all the experiments. Applied to two real scRNA-seq datasets with several replicates and one large-scale droplet-based dataset on three patient samples, scVDMC more accurately detected cell populations and known cell markers than pooled clustering and other recently proposed scRNA-seq clustering methods. In the case study applied to in-house Recessive Dystrophic Epidermolysis Bullosa (RDEB) scRNA-seq data, scVDMC revealed several new cell types and unknown markers validated by flow cytometry. MATLAB/Octave code available at https://github.com/kuanglab/scVDMC. PMID:29630593
Using the Fusion Proximal Area Method and Gravity Method to Identify Areas with Physician Shortages
Xiong, Xuechen; Jin, Chao; Chen, Haile; Luo, Li
2016-01-01
Objectives: This paper presents a geographic information system (GIS)-based proximal area method and gravity method for identifying areas with physician shortages. The innovation of this paper is that it uses the appropriate method to assess each type of health resource and then integrates all these methods to assess spatial access to health resources using population distribution data. In this way, spatial access to health resources for an entire city can be visualized in one neat package, which can help health policy makers quickly comprehend realistic distributions of health resources at a macro level. Methods: First, classify health resources according to the trade areas of the patients they serve. Second, apply an appropriate method to each type of health resource to measure spatial access to those resources. Third, integrate all types of access using population distribution data. Results: In the case study of Shanghai using the fusion method, areas with physician shortages are located primarily in suburban districts, especially in district junction areas. The results suggest that the government of Shanghai should pay more attention to these areas by investing in new health resources or relocating existing ones. Conclusion: The fusion method is demonstrated to be more accurate and practicable than using a single method to assess spatial access to health resources. PMID:27695105
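A minimal sketch of the gravity-model component mentioned above: each population location's accessibility score sums physician supply at surrounding facilities, weighted by a distance-decay function and normalized by the demand competing for each facility. The decay exponent and the toy data are illustrative assumptions.

```python
import numpy as np

def gravity_accessibility(dist, supply, population, beta=1.5):
    """Gravity-based spatial access score for each population location.

    dist: (n_pop, n_fac) distance matrix; supply: physicians per facility;
    population: demand at each population location; beta: distance-decay exponent.
    """
    weight = dist ** (-beta)                          # distance-decay weights
    # Demand pressure on each facility from all population locations
    demand = (population[:, None] * weight).sum(axis=0)
    # Accessibility: supply-to-demand ratio at each facility, decayed back to people
    return (weight * (supply / demand)[None, :]).sum(axis=1)

# Illustrative use: 4 neighbourhoods, 2 clinics
dist = np.array([[1.0, 4.0],
                 [2.0, 3.0],
                 [5.0, 1.5],
                 [6.0, 2.0]])                         # km
supply = np.array([20.0, 8.0])                        # physicians per clinic
population = np.array([5000.0, 8000.0, 6000.0, 4000.0])
scores = gravity_accessibility(dist, supply, population)
print("physicians per capita (gravity-weighted):", np.round(scores, 5))
```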
Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.
Nadal, Clement; Pigache, Francois
2009-11-01
This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials are presented for both models. Finally, the validity of the electrodynamical model of nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubouchi, Masatoshi; Hojo, Hidemitsu
The thermal shock resistance of epoxy resin specimens toughened with carboxy-terminated poly(butadiene-acrylonitrile) (CTBN) and polyglycol was tested using a new notched disk-type specimen. The new thermal shock testing method consists of quenching a notched disk-type specimen and applying a theoretical analysis to the test results to determine crack propagation conditions. For both toughened epoxy resins, this test method evaluated improvements in thermal shock resistance. The thermal shock resistance of epoxy resin toughened with CTBN exhibited a maximum at a CTBN content of 35 parts per hundred resin. The epoxy resin toughened with polyglycol exhibited improved thermal shock resistance with increasing glycol content. 7 refs., 14 figs., 1 tab.
Condenser-type diffusion denuders for the collection of sulfur dioxide in a cleanroom.
Chang, In-Hyoung; Lee, Dong Soo; Ock, Soon-Ho
2003-02-01
High-efficiency condenser-type diffusion denuders of cylindrical and planar geometries are described. The film condensation of water vapor onto a cooled denuder surface can be used as a method for collecting water-soluble gases. Using SO2 as the test gas, the planar design offers quantitative collection efficiency at air sampling rates up to 5 L min⁻¹. Coupled to ion chromatography, the limit of detection (LOD) for SO2 is 0.014 ppbv with a 30-min successive analysis sequence. The method has been successfully applied to the analysis of temperature- and humidity-controlled cleanroom air.
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. A complete analysis of these formulas is also provided, including some new results about the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.
Production of multicharged metal ion beams on the first stage of tandem-type ECRIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagino, Shogo, E-mail: hagino@nf.eie.eng.osaka-u.ac.jp; Nagaya, Tomoki; Nishiokada, Takuya
2016-02-15
Multicharged metal ion beams are required for applications in a wide range of fields. We aim at synthesizing iron-endohedral fullerene by transporting iron ion beams from the first stage into the fullerene plasma in the second stage of the tandem-type electron cyclotron resonance ion source (ECRIS). We developed new evaporators using a direct ohmic heating method and a radiation heating method from solid-state pure metal materials. We investigate their properties in the test chamber and produce iron ions on the first stage of the tandem-type ECRIS. As a result, we were successful in extracting Fe⁺ ion beams from the first stage and introducing them into the second stage. We will try to synthesize iron-endohedral fullerene on the tandem-type ECRIS by using these evaporators.
Wu, Jin-Lei; Ji, Xin; Zhang, Shou
2016-01-01
Recently, a novel three-dimensional entangled state called tree-type entanglement, which is likely to have applications for improving quantum communication security, was prepared via adiabatic passage by Song et al. Here we propose two schemes for fast generating tree-type three-dimensional entanglement among three spatially separated atoms via shortcuts to adiabatic passage. With the help of quantum Zeno dynamics, two kinds of different but equivalent methods, Lewis-Riesenfeld invariants and transitionless quantum driving, are applied to construct shortcuts to adiabatic passage. The comparisons between the two methods are discussed. The strict numerical simulations show that the tree-type three-dimensional entangled states can be fast prepared with quite high fidelities and the two schemes are both robust against the variations in the parameters, atomic spontaneous emissions and the cavity-fiber photon leakages. PMID:27667583
NASA Astrophysics Data System (ADS)
Kacenelenbogen, M. S.; Tan, Q.; Johnson, M. S.; Burton, S. P.; Redemann, J.; Hasekamp, O. P.; Dawson, K. W.; Hair, J. W.; Ferrare, R. A.; Butler, C. F.; Holben, B. N.; Beyersdorf, A. J.; Ziemba, L. D.; Froyd, K. D.; Dibb, J. E.; Shingler, T.; Sorooshian, A.; Jimenez, J. L.; Campuzano Jost, P.; Jacob, D.; Kim, P. S.; Travis, K.; Lacagnina, C.
2016-12-01
It is essential to evaluate and refine aerosol classification methods applied to passive satellite remote sensing. We have developed an aerosol classification algorithm (called Specified Clustering and Mahalanobis Classification, SCMC) that assigns an aerosol type to multi-parameter retrievals by spaceborne, airborne or ground-based passive remote sensing instruments [1]. The aerosol types identified by our scheme are pure dust, polluted dust, urban-industrial/developed economy, urban-industrial/developing economy, dark biomass smoke, light biomass smoke and pure marine. We apply the SCMC method to inversions from the ground-based AErosol RObotic NETwork (AERONET [2]) and retrievals from the space-borne Polarization and Directionality of Earth's Reflectances instrument (POLDER, [3]). The POLDER retrievals that we use differ from the standard POLDER retrievals [4] as they make full use of multi-angle, multispectral polarimetric data [5]. We analyze agreement in the aerosol types inferred from both AERONET and POLDER and evaluate GEOS-Chem [6] simulations over the globe. Finally, we use in-situ observations from the SEAC4RS airborne field experiment to bridge the gap between remote sensing-inferred qualitative SCMC aerosol types and their corresponding quantitative chemical speciation. We apply the SCMC method to airborne in-situ observations from the NASA Langley Aerosol Research Group Experiment (LARGE, [7]) and the Differential Aerosol Sizing and Hygroscopicity Spectrometer Probe (DASH-SP, [8]) instruments; we then relate each coarsely defined SCMC type to a sum of percentage of individual aerosol species, using in-situ observations from the Particle Analysis by Laser Mass Spectrometry (PALMS, [9]), the Soluble Acidic Gases and Aerosol (SAGA, [10]), and the High - Resolution Time - of - Flight Aerosol Mass Spectrometer (HR ToF AMS, [11]). [1] Russell P. B., et al., JGR, 119.16 (2014) [2] Holben B. N., et al., RSE, 66.1 (1998) [3] Tanré D., et al., AMT, 4.7 (2011) [4] Deuzé J. L., et al., JGR, 106.D5 (2001) [5] Hasekamp O. P., et al., JGR, 116.D14 (2011) [6] Bey I., et al., JGR, 106.D19 (2001) [7] Ziemba L. D., et al., GRL, 40.2 (2013) [8] Sorooshian A., et al., AST, 42.6 (2008) [9] Murphy D. M., et al., JGR, 111.D23 (2006) [10] Dibb J. E., et al., JGR, 108.D21 (2003) [11] DeCarlo P. F., et al., AC, 78.24 (2006)
Reiman, Arto; Pekkala, Janne; Väyrynen, Seppo; Putkonen, Ari; Forsman, Mikael
2014-01-01
The aim of this study was to identify risks and ergonomic discomfort in the work that local and short-haul delivery truck drivers perform outside the cab. The study used a video- and computer-based method (VIDAR). VIDAR is a participatory method for identifying demanding work situations and their potential risks. The drivers' work was videoed and analysed by the subjects and by ergonomists. Delivery truck drivers should not be perceived as one group with equal risks, because there were significant differences between the 2 types of transportation and in the specific types of risks. VIDAR produces visual material for risk management processes. VIDAR as a participatory approach stimulates active discussion about work-related risks and discomfort, and about possibilities for improvement. VIDAR may also be applied to work which comprises different working environments.
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
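To make the idea concrete, the sketch below (a minimal illustration, not the authors' implementation) builds a binary Huffman code from next-character probabilities that a statistical language model might supply; each codeword can be read as the sequence of binary-switch activations needed to select that character. The probabilities and symbol set are invented for the example.

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code from a {symbol: probability} dict.
    Each codeword (e.g. '010') can be read as the yes/no switch-press
    sequence needed to select that symbol in a binary-switch interface."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # Prefix '0' to one subtree and '1' to the other, then merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Illustrative next-character probabilities from a language model (made up).
probs = {"e": 0.40, "t": 0.25, "a": 0.15, "o": 0.12, "_": 0.08}
code = huffman_code(probs)
print(code)   # likely symbols get shorter press sequences
```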
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin-type boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first-order space derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The analytical results obtained by applying the proposed method are in complete agreement with those obtained by using the numerical finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those obtained, for instance, by the method of separation of variables or by the finite difference method.
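As an illustrative model problem of the kind described (the specific coefficients and interval are assumptions, not taken from the paper), a wave equation on a bounded interval with a time-dependent Robin condition at one end can be written as:

```latex
\begin{aligned}
  & u_{tt} - c^{2}\, u_{xx} = 0, && 0 < x < L,\ t > 0,\\
  & u(x,0) = \phi(x), \qquad u_t(x,0) = \psi(x), && 0 \le x \le L,\\
  & u(0,t) = 0, && t \ge 0,\\
  & u_x(L,t) + k(t)\, u(L,t) = 0, && t \ge 0.
\end{aligned}
```

The Robin condition at x = L couples u and u_x through the time-dependent coefficient k(t), which is what rules out separation of variables; the d'Alembert representation u(x,t) = F(x - ct) + G(x + ct) can instead be propagated along the characteristics.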
Preliminary sizing and performance of aircraft
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1985-01-01
The basic processes of a program that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. Input requirements are defined and output listings explained. Results obtained by applying the method to several types of aircraft are discussed.
75 FR 81592 - National Energy Technology Laboratory; Notice of Intent To Grant Exclusive License
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
..., entitled ``Method for designing a reforming and/or combustion catalyst system'' and ``Pyrochlore-type catalysts for the reforming of hydrocarbon fuels,'' respectively, to Pyrochem Catalyst... filing written objections. Pyrochem Catalyst Corporation, a new small business, has applied for an...
Speciation of mercury in sludge solids: washed sludge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bannochie, C. J.; Lourie, A. P.
2017-10-24
The objective of this applied research task was to study the type and concentration of mercury compounds found within the contaminated Savannah River Site Liquid Waste System (SRS LWS). A method of selective sequential extraction (SSE), developed by Eurofins Frontier Global Sciences and adapted by SRNL, utilizes an extraction procedure divided into seven separate tests for different species of mercury. In SRNL's modified procedure, four of these tests were applied to a washed sample of high-level radioactive waste sludge.
NASA Astrophysics Data System (ADS)
Trandafir, Laura; Alexandru, Mioara; Constantin, Mihai; Ioniţă, Anca; Zorilă, Florina; Moise, Valentin
2012-09-01
EN ISO 11137 establishes regulations for setting or substantiating the dose that achieves the desired sterility assurance level. The validation studies can be designed in particular for different types of products. Each product needs distinct protocols for bioburden determination and sterility testing. The Microbiological Laboratory of the Irradiation Processing Center (IRASM) deals with different types of products, mainly using the VDmax25 method. In terms of microbiological evaluation, the most challenging product was cotton gauze. A special situation for establishing the sterilization validation method arises for cotton packed in large quantities. The VDmax25 method cannot be applied to items with an average bioburden of more than 1000 CFU/pack, irrespective of the weight of the package. This is a method limitation and implies increased costs for the manufacturer when choosing other methods. For microbiological tests, culture conditions should be selected for both bioburden determination and sterility testing. Details about the selection criteria are given.
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low sample size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. Particularly, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of the RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models and utilize a boosting-type method to handle the high dimensionality as well as to model the possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
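A minimal sketch of generic multivariate component-wise L2 boosting is given below; it is not the authors' algorithm, only the basic idea of selecting, at each step, the single predictor whose univariate least-squares fit to the current residual matrix gives the largest loss reduction, and adding a shrunken copy of that fit. Data and settings are synthetic.

```python
import numpy as np

def multivariate_componentwise_boost(X, Y, n_steps=200, nu=0.1):
    """Generic component-wise L2 boosting for a multivariate response Y (n x q)
    on predictors X (n x p). Returns a p x q coefficient matrix."""
    n, p = X.shape
    q = Y.shape[1]
    B = np.zeros((p, q))
    R = Y - Y.mean(axis=0)            # residuals (responses centered)
    Xc = X - X.mean(axis=0)
    for _ in range(n_steps):
        best_j, best_gain, best_beta = None, -np.inf, None
        for j in range(p):
            xj = Xc[:, j]
            denom = xj @ xj
            if denom == 0:
                continue
            beta = (xj @ R) / denom                   # univariate LS fit per response
            gain = np.sum(np.outer(xj, beta) ** 2)    # proxy for SSE reduction
            if gain > best_gain:
                best_j, best_gain, best_beta = j, gain, beta
        B[best_j] += nu * best_beta
        R -= nu * np.outer(Xc[:, best_j], best_beta)
    return B

# Toy example with two responses driven by the first two predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y = np.column_stack([2 * X[:, 0], -1.5 * X[:, 1]]) + 0.1 * rng.normal(size=(100, 2))
print(np.round(multivariate_componentwise_boost(X, Y), 2))
```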
NASA Technical Reports Server (NTRS)
Navon, I. M.
1984-01-01
A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
[Use and knowledge of contraceptive methods in female students of children education].
Schilling, A; Rubio, L; Schlein, J
1989-01-01
A survey on contraceptive use and knowledge was applied to 292 female students (average age = 21.3 years). Of the women with sexual activity, 88.4% had used contraceptive methods at least once. The principal reason for discontinuing their use was not having sexual intercourse. In single women, the use of contraceptive methods was related to age, while the type selected was related to the frequency of sexual intercourse. The most frequently used contraceptive methods were rhythm and the pill, which were not the best known ones.
Ranking of options of real estate use by expert assessments mathematical processing
NASA Astrophysics Data System (ADS)
Lepikhina, O. Yu; Skachkova, M. E.; Mihaelyan, T. A.
2018-05-01
The article is devoted to the development of a real estate assessment concept. For conditions in which a property can be used in multiple ways, a method based on calculating an integral indicator of each variant's efficiency is proposed. To calculate the weights of the efficiency criteria, an expert method, the Analytic Hierarchy Process, and its mathematical apparatus are used. The method allows alternative types of real estate use to be ranked according to their efficiency. The method was applied to a land parcel located in the Primorsky district of Saint Petersburg.
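For illustration, the standard AHP step of deriving criterion weights from a reciprocal pairwise comparison matrix via its principal eigenvector can be sketched as follows (the 3 x 3 judgments are invented, not the study's expert data):

```python
import numpy as np

def ahp_weights(A):
    """Criterion weights from a pairwise comparison matrix A (AHP):
    principal right eigenvector, normalized to sum to one."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    # Consistency ratio (RI = 0.58 is the standard random index for n = 3).
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    return w, ci / 0.58

# Illustrative judgments: criterion 1 vs 2 vs 3 (reciprocal matrix).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)   # weights then score each land-use alternative
```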
NASA Technical Reports Server (NTRS)
Damkohler, Gerhard
1950-01-01
The analytical results of Part I are also applied to sound dispersion by friction and heat conduction. An irreversible change of momentum, energy, and type of particle, corresponding to friction, heat conduction, and diffusion effects, can appear both in the direction of the sound field and transverse to it. Longitudinal damping, the coupling of longitudinal damping with that due to chemical and physical changes, and the coupling of diffusion and compositional changes are treated for a plane sound wave of infinite extent. The same principles are also applied to sound effects in cylindrical tubes. The limitations of the method are discussed in some detail.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Clustering of samples and variables with mixed-type data
Edelmann, Dominic; Kopp-Schneider, Annette
2017-01-01
Analysis of data measured on different scales is a relevant challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the need for integration of other features possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. The analysis results (e.g. a selection of relevant genes) are then visualized, while adding further information, like clinical factors, on top. However, a more integrative approach is desirable, where all available data are analyzed jointly, and where also in the visualization different data sources are combined in a more natural way. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables does not receive as much attention in the literature as does clustering of samples. We extend the variables clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying specific methods for mixed-type data proves to be comparable and in many cases beneficial as compared to standard approaches applied to corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage, especially for the purpose of visualization. Real data examples aim to give an impression of various kinds of potential applications for the integrative heatmap and other graphical displays based on dissimilarity matrices. We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationship among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix available from https://cran.r-project.org/web/packages/CluMix. PMID:29182671
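A crude, generic sketch of variable clustering for mixed-type data is shown below; it is not the CluMix implementation and uses a simple rank-correlation-based dissimilarity (categorical variables naively integer-coded) purely to illustrate the "dissimilarity matrix plus hierarchical clustering" workflow described. Data are synthetic.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def variable_dissimilarity(df):
    """Crude mixed-type association: rank correlation of numerically coded
    columns (categoricals are factorized). Dissimilarity = 1 - |rho|."""
    coded = df.apply(lambda c: c if np.issubdtype(c.dtype, np.number)
                     else pd.Series(pd.factorize(c)[0], index=c.index))
    rho, _ = spearmanr(coded.values)
    return 1.0 - np.abs(np.atleast_2d(rho))

# Toy data frame with quantitative and categorical variables.
df = pd.DataFrame({"x1": np.random.randn(50),
                   "x2": np.random.randn(50),
                   "grp": np.random.choice(["a", "b"], 50)})
df["x3"] = df["x1"] * 2 + 0.1 * np.random.randn(50)   # correlated with x1

D = variable_dissimilarity(df)
Z = linkage(squareform(D, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # cluster labels for the 4 variables
```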
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Although Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, as they need an infinite number of steps to conclude the exact stability of a filter, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order Non-Symmetric Half-Plane (NSHP) filters. Enough examples are given for both these types of filters as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, thus showing that our method is accurate enough.
Hyperspectral imaging and multivariate analysis in the dried blood spots investigations
NASA Astrophysics Data System (ADS)
Majda, Alicja; Wietecha-Posłuszny, Renata; Mendys, Agata; Wójtowicz, Anna; Łydżba-Kopczyńska, Barbara
2018-04-01
The aim of this study was to apply a new methodology combining hyperspectral imaging with dried blood spot (DBS) collection. The application of hyperspectral imaging is fast and non-destructive. The DBS method also offers the advantages of micro-invasive blood collection and a low required sample volume. During the experimental step, the reflected light was recorded by two hyperspectral systems, collecting 776 spectral bands in the VIS-NIR range (400-1000 nm) and 256 spectral bands in the SWIR range (970-2500 nm). The pixel size is 8 × 8 and 30 × 30 µm for the VIS-NIR and SWIR cameras, respectively. The obtained data, in the form of hyperspectral cubes, were treated with chemometric methods, i.e., minimum noise fraction and principal component analysis. It has been shown that applying these methods to this type of data and analyzing the scatter plots allows a rapid analysis of the homogeneity of the DBS and the selection of representative areas for further analysis. It also gives the possibility of tracking the dynamics of changes occurring in biological traces applied on the surface. For the 28 blood samples analyzed, the described method allowed the blood stains to be distinguished by time of application.
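As a hedged illustration of the chemometric step (array sizes and data are synthetic, and the minimum noise fraction step is omitted), a hyperspectral cube can be unfolded to pixels x bands and projected onto its leading principal components:

```python
import numpy as np

def pca_scores(cube, n_components=3):
    """PCA on a hyperspectral cube of shape (rows, cols, bands).
    Returns per-pixel score images of shape (rows, cols, n_components)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                      # center each band
    # Principal directions from the SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T
    return scores.reshape(rows, cols, n_components)

# Synthetic stand-in for a VIS-NIR cube (e.g. 64 x 64 pixels, 200 bands).
cube = np.random.rand(64, 64, 200)
pc = pca_scores(cube)
print(pc.shape)   # scatter plots of PC1 vs PC2 can reveal DBS homogeneity
```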
Extraction of memory colors for preferred color correction in digital TVs
NASA Astrophysics Data System (ADS)
Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed. More specifically, areas of memory colors such as skin, grass, and sky are modified to generate a pleasing impression for viewers. Before applying the preferred color correction, the tendency of preference for memory colors should be identified. This is often accomplished by off-line human visual tests. Areas containing the memory colors should be extracted, and then color correction is applied to the extracted areas. These processes should be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that the proposed method outperforms previous methods for memory color extraction.
Magnetic printing characteristics using master disk with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Fujiwara, Naoto; Nishida, Yoichi; Ishioka, Toshihide; Sugita, Ryuji; Yasunaga, Tadashi
With the increase in the recording density and capacity of hard-disk drives (HDD), a high-speed, high-precision and low-cost servo writing method has become an issue in the HDD industry. Magnetic printing was proposed as the ultimate solution to this issue [1-3]. There are two types of magnetic printing methods, 'Bit Printing (BP)' and 'Edge Printing (EP)'. The BP method is conducted by applying an external field perpendicular to the plane of both the master disk (Master) and the perpendicular magnetic recording (PMR) media (Slave). On the other hand, the EP method is conducted by applying an external field along the down-track direction of both master and slave. In BP, for bit lengths shorter than 100 nm, the SNR of the perpendicularly anisotropic master was higher than that of the isotropic master, and the SNR of EP was demonstrated for bit lengths shorter than 50 nm.
RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA
Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.
2015-01-01
To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
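One simple way to make the idea concrete (an illustrative proxy, not the authors' implementation) is to reorder the slice time series according to each candidate interleave step and keep the step whose ordering restores the smoothest correlation between consecutively acquired slices:

```python
import numpy as np

def mean_adjacent_corr(ts):
    """Mean correlation between time series of consecutive rows of ts (n_slices x T)."""
    cs = [np.corrcoef(ts[i], ts[i + 1])[0, 1] for i in range(len(ts) - 1)]
    return float(np.mean(cs))

def detect_interleave(slice_ts, max_step=4):
    """Guess the interleave step: reorder slices into the acquisition order
    implied by each candidate step and keep the step whose ordering gives
    the highest correlation between consecutively acquired slices."""
    n = len(slice_ts)
    best_step, best_score = 1, -np.inf
    for step in range(1, max_step + 1):
        order = [s for start in range(step) for s in range(start, n, step)]
        score = mean_adjacent_corr(slice_ts[order])
        if score > best_score:
            best_step, best_score = step, score
    return best_step

# Synthetic example: 20 slices x 100 time points acquired with interleave 2,
# where consecutively *acquired* slices share a slowly drifting component.
rng = np.random.default_rng(1)
n_sl, T, true_step = 20, 100, 2
acq_order = [s for start in range(true_step) for s in range(start, n_sl, true_step)]
drift = np.cumsum(rng.normal(size=(n_sl, T)), axis=0)   # smooth across acquisition order
data = np.empty((n_sl, T))
data[acq_order] = drift + 0.5 * rng.normal(size=(n_sl, T))
print(detect_interleave(data))   # should recover the interleave step (2 here)
```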
Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.
Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit
2007-10-11
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.
Investigation of Crustal Thickness in Eastern Anatolia Using Gravity, Magnetic and Topographic Data
NASA Astrophysics Data System (ADS)
Pamukçu, Oya Ankaya; Akçığ, Zafer; Demirbaş, Şevket; Zor, Ekrem
2007-12-01
The tectonic regime of Eastern Anatolia is determined by the Arabia-Eurasia continent-continent collision. Several dynamic models have been proposed to characterize the collision zone and its geodynamic structure. In this study, the change in crustal thickness has been investigated using gravity, magnetic and topographic data of the region. In the first stage, a two-dimensional low-pass filter and upward analytical continuation techniques were applied to the Bouguer gravity data of the region to investigate the behavior of the regional gravity anomalies. Next, the moving-window power spectrum method was used, and changes in the probable structural depths from 38 to 52 km were determined. The changes in crustal thickness in areas where the free-air gravity and magnetic data are inversely correlated, and the types of the anomaly sources, were investigated by applying the Euler deconvolution method to the Bouguer gravity data. The obtained depth values are consistent with the results obtained using the power spectrum method. It was determined that the types of anomaly sources are different to the west and east of the 40° E longitude. Finally, using the findings obtained from this study and the seismic velocity models proposed for this region by previous studies, a probable two-dimensional crust model was constructed.
An extended Kalman filter for mouse tracking.
Choi, Hongjun; Kim, Mingi; Lee, Onseok
2018-05-19
Animal tracking is an important tool for observing behavior, which is useful in various research areas. Animal specimens can be tracked using dynamic models and observation models that require several types of data. Tracking mice presents several difficulties due to the physical characteristics of the mouse, its unpredictable movement, and cluttered environments. Therefore, we propose a reliable method that uses a detection stage and a tracking stage to successfully track mice. The detection stage detects the surface area of the mouse skin, and the tracking stage implements an extended Kalman filter to estimate the state variables of a nonlinear model. The changes in the overall shape of the mouse are tracked using an oval-shaped tracking model to estimate the parameters of the ellipse. An experiment is conducted to demonstrate the performance of the proposed tracking algorithm using six video sequences showing various types of movement, and the ground truth values for synthetic images are compared to the values generated by the tracking algorithm. A conventional manual tracking method is also applied for comparison across eight experimenters. Furthermore, the effectiveness of the proposed tracking method is demonstrated by applying the tracking algorithm to actual images of mice.
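The generic EKF predict/update cycle used in such a tracking stage can be sketched as follows (a minimal, general-purpose version; the paper's ellipse-parameter state and measurement models are not reproduced, and the toy example below happens to be linear):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle: predict with the (possibly nonlinear)
    dynamics f, then correct with measurement z through the observation model h.
    F_jac and H_jac return the Jacobians of f and h at the current estimate."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy use: constant-velocity state [x, y, vx, vy], position-only measurements.
dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
x, P = np.zeros(4), np.eye(4)
x, P = ekf_step(x, P, np.array([1.0, 0.5]),
                f=lambda s: F @ s, F_jac=lambda s: F,
                h=lambda s: H @ s, H_jac=lambda s: H,
                Q=0.01 * np.eye(4), R=0.1 * np.eye(2))
print(np.round(x, 3))
```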
Accurate and reproducible functional maps in 127 human cell types via 2D genome segmentation
Hardison, Ross C.
2017-01-01
Abstract The Roadmap Epigenomics Consortium has published whole-genome functional annotation maps in 127 human cell types by integrating data from studies of multiple epigenetic marks. These maps have been widely used for studying gene regulation in cell type-specific contexts and predicting the functional impact of DNA mutations on disease. Here, we present a new map of functional elements produced by applying a method called IDEAS on the same data. The method has several unique advantages and outperforms existing methods, including that used by the Roadmap Epigenomics Consortium. Using five categories of independent experimental datasets, we compared the IDEAS and Roadmap Epigenomics maps. While the overall concordance between the two maps is high, the maps differ substantially in the prediction details and in their consistency of annotation of a given genomic position across cell types. The annotation from IDEAS is uniformly more accurate than the Roadmap Epigenomics annotation and the improvement is substantial based on several criteria. We further introduce a pipeline that improves the reproducibility of functional annotation maps. Thus, we provide a high-quality map of candidate functional regions across 127 human cell types and compare the quality of different annotation methods in order to facilitate biomedical research in epigenomics. PMID:28973456
NASA Astrophysics Data System (ADS)
Mihanovic, H.; Vilibic, I.
2014-12-01
Herein we present three recent oceanographic studies performed in the Adriatic Sea (the northernmost arm of the Mediterranean Sea), where Self-Organizing Maps (SOM) method, an unsupervised neural network method capable of recognizing patterns in various types of datasets, was applied to environmental data. The first study applied the SOM method to a long (50 years) series of thermohaline, dissolved oxygen and nutrient data measured over a deep (1200 m) Southern Adriatic Pit, in order to extract characteristic deep water mass patterns and their temporal variability. Low-dimensional SOM solutions revealed that the patterns were not sensitive to nutrients but were determined mostly by temperature, salinity and DO content; therefore, the water masses in the region can be traced by using no nutrient data. The second study encompassed the classification of surface current patterns measured by HF radars over the northernmost part of the Adriatic, by applying the SOM method to the HF radar data and operational mesoscale meteorological model surface wind fields. The major output from this study was a high correlation found between characteristic ocean current distribution patterns with and without wind data introduced to the SOM, implying the dominant wind driven dynamics over a local scale. That nominates the SOM method as a basis for generating very fast real-time forecast models over limited domains, based on the existing atmospheric forecasts and basin-oriented ocean experiments. The last study classified the sea ambient noise distributions in a habitat area of bottlenose dolphin, connecting it to the man-made noise generated by different types of vessels. Altogether, the usefulness of the SOM method has been recognized in different aspects of basin-scale ocean environmental studies, and may be a useful tool in future investigations of understanding of the multi-disciplinary dynamics over a basin, including the creation of operational environmental forecasting systems.
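A minimal online SOM training loop is sketched below, included only to illustrate the pattern-extraction step; the grid size, learning schedule and data are invented, not the configurations used in the studies described.

```python
import numpy as np

def train_som(X, grid=(4, 3), n_iter=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal online Self-Organizing Map. X is (n_samples, n_features).
    Returns the codebook of shape (grid_rows, grid_cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows, cols, X.shape[1]))
    gy, gx = np.mgrid[0:rows, 0:cols]        # grid coordinates for the neighborhood
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1 - frac)                # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5    # decaying neighborhood radius
        x = X[rng.integers(len(X))]
        # Best-matching unit (BMU).
        d2 = ((W - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(np.argmin(d2), d2.shape)
        # Gaussian neighborhood around the BMU on the grid.
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)
    return W

# Toy stand-in for normalized thermohaline profiles (temperature, salinity, DO).
X = np.random.rand(500, 3)
codebook = train_som(X)
print(codebook.shape)   # each unit is a characteristic pattern
```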
Monolithic high voltage nonlinear transmission line fabrication process
Cooper, Gregory A.
1994-01-01
A process for fabricating sequential inductors and varactor diodes of a monolithic, high voltage, nonlinear, transmission line in GaAs is disclosed. An epitaxially grown laminate is produced by applying a low doped active n-type GaAs layer to an n-plus type GaAs substrate. A heavily doped p-type GaAs layer is applied to the active n-type layer and a heavily doped n-type GaAs layer is applied to the p-type layer. Ohmic contacts are applied to the heavily doped n-type layer where diodes are desired. Multiple layers are then either etched away or Oxygen ion implanted to isolate individual varactor diodes. An insulator is applied between the diodes and a conductive/inductive layer is thereafter applied on top of the insulator layer to complete the process.
Multilayer ultra thick resist development for MEMS
NASA Astrophysics Data System (ADS)
Washio, Yasushi; Senzaki, Takahiro; Masuda, Yasuo; Saito, Koji; Obiya, Hiroyuki
2005-05-01
MEMS (Micro-Electro-Mechanical Systems) are achieved through a process technology called micro-machining. There are two distinct methods to manufacture a MEMS product. One method is to form a permanent film through photolithography, and the other is to form a non-permanent resist film by photolithography followed by an etch or plating process. The three-dimensional ultra-fine processing technology is based on photolithography and is assembled from processes such as anodic bonding and from post-lithography processes such as etching and plating. Currently ORDYL PR-100 (Dry Film Type) is used for the permanent resist process. TOK has also developed TMMR S2000 (Liquid Type) and TMMF S2000 (Dry Film Type), as well as a new process utilizing these resists. The electro-forming method by photolithography has been developed as one of the methods enabling high-resolution, high-aspect-ratio formation. In recent years, it has become possible to manufacture conventionally difficult multilayer structures through our material and equipment (M&E) development project. As a material for electro-forming, it was confirmed that chemically amplified resist is optimal, given its reaction mechanism, as it is easily removed by the cleaning solution. Moreover, multiple plating formations were enabled with this resist through a new process. As for equipment, TOK developed an applicator (capable of applying 500 µm or more) and a developer, which achieve high throughput and quality. Plating formations with differing paths, and air wiring, are realizable through M&E. From the above results, in contrast to metallic mold plating, the electro-forming method by resist enables high-resolution, high-aspect-ratio patterns to be formed at low cost. It is thought that infinite possibilities open up by applying this process.
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and loge Age, we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
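The regression step described, a log-linear transformation followed by an exponential model relating Q to age, can be sketched as below; the (Q, age) values are synthetic stand-ins, not the published sample.

```python
import numpy as np

# Synthetic stand-in for (Q, age) pairs; real values would come from GALEX FUV
# photometry and literature ages, which are not reproduced here.
rng = np.random.default_rng(42)
Q = rng.uniform(0.0, 2.0, 40)
age = 0.8 * np.exp(1.3 * Q) * np.exp(0.1 * rng.normal(size=Q.size))  # Gyr, illustrative

# Log-linear transform: ln(age) = ln(a) + b * Q, fitted by least squares.
b, ln_a = np.polyfit(Q, np.log(age), 1)
a = np.exp(ln_a)
print(f"age ~ {a:.2f} * exp({b:.2f} * Q)  [Gyr, illustrative fit]")

# Predict an age from a new Q value.
print(a * np.exp(b * 1.2))
```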
48 CFR 2415.304 - Evaluation factors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DEVELOPMENT CONTRACTING METHODS AND CONTRACTING TYPES CONTRACTING BY NEGOTIATION Source Selection 2415.304... assigned a numerical weight (except for pass-fail factors) which shall appear in the RFP. When using LPTA, each evaluation factor is applied on a “pass-fail” basis; numerical scores are not assigned. “Pass-fail...
48 CFR 15.407-3 - Forward pricing rate agreements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 15.407-3 Forward pricing rate agreements. (a) When certified cost or pricing data are required, offerors are required to... apply and to identify the latest cost or pricing data already submitted in accordance with the FPRA. All...
48 CFR 15.407-3 - Forward pricing rate agreements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 15.407-3 Forward pricing rate agreements. (a) When certified cost or pricing data are required, offerors are required to... apply and to identify the latest cost or pricing data already submitted in accordance with the FPRA. All...
Exploring University Students' Online Information Seeking about Prescription Medications
ERIC Educational Resources Information Center
Alkhalaf, Ahmad Abdullah
2013-01-01
This study explored university students' information seeking behaviors related to prescription medication (PM) information. Specifically, it examined the different sources students use for PM information, their use and perceptions of online sources, the types of PM information they seek, their concerns about, and methods they apply to verify the…
Empirical Specification of Utility Functions.
ERIC Educational Resources Information Center
Mellenbergh, Gideon J.
Decision theory can be applied to four types of decision situations in education and psychology: (1) selection; (2) placement; (3) classification; and (4) mastery. For the application of the theory, a utility function must be specified. Usually the utility function is chosen on a priori grounds. In this paper methods for the empirical assessment…
APHASIC CHILDREN, IDENTIFICATION AND EDUCATION BY THE ASSOCIATION METHOD.
ERIC Educational Resources Information Center
MCGINNIS, MILDRED A.
THIS BOOK IS DESIGNED TO DEFINE APHASIA AND ITS CHARACTERISTICS, TO PRESENT A PROCEDURE FOR TEACHING LANGUAGE TO APHASIC CHILDREN, AND TO APPLY THIS PROCEDURE TO ELEMENTARY SCHOOL SUBJECTS. OTHER HANDICAPPING CONDITIONS WHICH COMPLICATE THE DIAGNOSIS OF APHASIA ARE PRESENTED BY MEANS OF CASE STUDIES. CHARACTERISTICS OF TWO TYPES OF…
A Performance-Based Method of Student Evaluation
ERIC Educational Resources Information Center
Nelson, G. E.; And Others
1976-01-01
The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle to use in performance based type evaluation. A test-run use of the record is reported. (JT)
Right-of-Way Pest Control. Sale Publication 4075.
ERIC Educational Resources Information Center
Stimmann, M. W., Ed.
This manual discusses weed control to be applied to such areas as roads, airports, railroads, electric utilities, waterways; and trails. Included is information about types of weeds, methods of weed control, safe and effective use of herbicides, and application equipment. Some topics included are: (1) selective and nonselective herbicides; (2)…
USDA-ARS?s Scientific Manuscript database
Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...
Coaching for Tests. ERIC Digest.
ERIC Educational Resources Information Center
Wildemuth, Barbara
The term "coaching" applies to a variety of types of test preparation programs which vary in length, instructional method, and content. Most research on the effectiveness of coaching has examined the Scholastic Aptitude Test (SAT), a measure of academic abilities used to predict college performance. This ERIC Digest reviews studies of…
NASA Astrophysics Data System (ADS)
Kobayashi, Shigeki; Saitoh, Masumi; Nakabayashi, Yukio; Uchida, Ken
2007-11-01
Uniaxial stress effects on Coulomb-limited mobility (μCoulomb) in Si metal-oxide-semiconductor field-effect transistors (MOSFETs) are investigated experimentally. By using the four-point bending method, uniaxial stress corresponding to 0.1% strain is applied to MOSFETs along the channel direction. It is found that μCoulomb in p-type MOSFETs is enhanced greatly by uniaxial stress; μCoulomb is as sensitive as phonon-limited mobility. The high sensitivity of μCoulomb in p-type MOSFETs to stress arises from the stress-induced change of hole effective mass.
NASA Astrophysics Data System (ADS)
Elwakil, S. A.; El-Labany, S. K.; Zahran, M. A.; Sabry, R.
2004-04-01
The modified extended tanh-function method was applied to the general class of nonlinear diffusion-convection equations where the concentration-dependent diffusivity, D(u), was taken to be a constant while the concentration-dependent hydraulic conductivity, K(u), was taken to follow a power law. The obtained solutions include rational-type, triangular-type, singular-type, and solitary wave solutions. In fact, the profile of the obtained solitary wave solutions resembles the characteristics of a shock-wave-like structure for an arbitrary m (where m>1 is the power of the nonlinear convection term).
A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II
NASA Astrophysics Data System (ADS)
Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki
The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because the length of it requires at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized less than half the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, proposed methods are applied to the car transportation system, the experimental results confirm their validity.
Wang, Dongmei; Yu, Liniu; Zhou, Xianlian; Wang, Chengtao
2004-02-01
Four types of 3D mathematical models of the muscle groups applied to the human mandible have been developed. One is based on electromyography (EMG) and the others are based on linear programming with different objective functions. Each model contains 26 muscle forces and two joint forces, allowing simulation of static bite forces and concomitant joint reaction forces for various bite point locations and mandibular positions. In this paper, an image-processing method was used to measure the positions and directions of the muscle forces according to a 3D CAD model built from CT data. The Matlab optimization toolbox was applied to solve the three models based on linear programming. Results show that the model with an objective function requiring a minimum sum of the tensions in the muscles is reasonable and agrees very well with normal physiological activity.
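A toy version of the "minimum sum of muscle tensions" formulation is sketched below with a made-up 2-D geometry and three muscles; the real model has 26 muscle forces, joint forces and full 3-D force/moment equilibrium, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-D static equilibrium: three muscle forces with fixed unit directions
# must balance a required bite force; objective = minimum sum of tensions.
directions = np.array([[0.0, 1.0],      # unit direction of muscle 1
                       [0.5, 0.866],    # muscle 2
                       [-0.5, 0.866]])  # muscle 3
bite_force = np.array([0.0, 100.0])     # required net force (N), illustrative

c = np.ones(3)                          # minimize f1 + f2 + f3
A_eq = directions.T                     # 2 equilibrium equations (x and y)
b_eq = bite_force
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)   # tensions in each muscle
```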
Robust distortion correction of endoscope
NASA Astrophysics Data System (ADS)
Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.
2008-03-01
Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area shall be large enough to cover all the regions that the physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
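A hedged sketch of the general LUT idea is given below, using a simple radial polynomial model with invented coefficients; the paper's calibrated polynomial, coefficient values and mapping direction may differ.

```python
import numpy as np

def build_undistort_lut(width, height, center, k1, k2):
    """For each pixel of the corrected image, compute the (x, y) position to
    sample in the distorted image, using a radial polynomial model
    r_dist = r * (1 + k1*r^2 + k2*r^4). Returns two float maps (the LUT)."""
    cx, cy = center
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    x, y = xs - cx, ys - cy
    r = np.sqrt(x ** 2 + y ** 2)
    scale = 1.0 + k1 * r ** 2 + k2 * r ** 4
    map_x = cx + x * scale          # where to fetch each corrected pixel from
    map_y = cy + y * scale
    return map_x.astype(np.float32), map_y.astype(np.float32)

# Illustrative coefficients for a barrel-type lens; real values come from the
# calibration step (distortion-center estimation + coefficient fitting).
map_x, map_y = build_undistort_lut(720, 576, center=(360, 288),
                                   k1=1.5e-6, k2=2.0e-12)
print(map_x.shape, map_y.shape)   # feed into an interpolation/remap routine
```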
Diagonally Implicit Runge-Kutta Methods for Ordinary Differential Equations. A Review
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
2016-01-01
A review of diagonally implicit Runge-Kutta (DIRK) methods applied to first-order ordinary differential equations (ODEs) is undertaken. The goal of this review is to summarize the characteristics, assess the potential, and then design several nearly optimal, general purpose, DIRK-type methods. Over 20 important aspects of DIRK-type methods are reviewed. A design study is then conducted on DIRK-type methods having from two to seven implicit stages. From this, 15 schemes are selected for general purpose application. Testing of the 15 chosen methods is done on three singular perturbation problems. Based on the review of method characteristics, these methods focus on having a stage order of two, stiff accuracy, L-stability, high quality embedded and dense-output methods, small magnitudes of the algebraic stability matrix eigenvalues, small values of aii, and small or vanishing values of the internal stability function for large eigenvalues of the Jacobian. Among the 15 new methods, ESDIRK4(3)6L[2]SA is recommended as a good default method for solving stiff problems at moderate error tolerances.
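For illustration only (this is the classical two-stage, stiffly accurate, L-stable SDIRK scheme with gamma = 1 - 1/sqrt(2), not one of the 15 schemes designed in the review), a DIRK step on a stiff scalar ODE looks like this:

```python
import numpy as np

GAMMA = 1.0 - 1.0 / np.sqrt(2.0)   # 2-stage, L-stable, stiffly accurate SDIRK

def solve_stage(f, t_s, y_base, h, k0=0.0, tol=1e-12):
    """Newton iteration for the scalar stage equation k = f(t_s, y_base + h*GAMMA*k)."""
    k, eps = k0, 1e-7
    for _ in range(50):
        u0 = f(t_s, y_base + h * GAMMA * k)
        u1 = f(t_s, y_base + h * GAMMA * (k + eps))
        dg = 1.0 - (u1 - u0) / eps      # derivative of g(k) = k - f(...)
        step = (k - u0) / dg
        k -= step
        if abs(step) < tol:
            break
    return k

def sdirk2_step(f, t, y, h):
    k1 = solve_stage(f, t + GAMMA * h, y, h)
    k2 = solve_stage(f, t + h, y + h * (1 - GAMMA) * k1, h, k0=k1)
    return y + h * ((1 - GAMMA) * k1 + GAMMA * k2)

# Stiff test problem: y' = -50*(y - cos t), y(0) = 0.
f = lambda t, y: -50.0 * (y - np.cos(t))
t, y, h = 0.0, 0.0, 0.1
for _ in range(20):
    y = sdirk2_step(f, t, y, h)
    t += h
print(t, y, np.cos(t))   # y should track cos(t) closely
```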
Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju
2015-01-01
Abstract. Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
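One published variant of the effective-multiplicity idea, a Nyholt-style estimate from the eigenvalue spread of the channel correlation matrix, can be sketched as follows; it is shown as a representative estimator and is not necessarily the exact formula used in the paper. Data are synthetic.

```python
import numpy as np

def effective_multiplicity(signals):
    """Nyholt-style effective number of independent tests from the eigenvalues
    of the correlation matrix of the channel signals (channels x time)."""
    M = signals.shape[0]
    lam = np.linalg.eigvalsh(np.corrcoef(signals))
    return 1.0 + (M - 1.0) * (1.0 - np.var(lam, ddof=1) / M)

# Toy data: 44 channels, 300 time points, with shared structure among channels.
rng = np.random.default_rng(0)
common = rng.normal(size=300)
signals = 0.7 * common + 0.7 * rng.normal(size=(44, 300))
meff = effective_multiplicity(signals)
alpha = 0.05
print(round(meff, 1), "corrected threshold:", alpha / meff)   # vs 0.05/44 for Bonferroni
```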
Arndt, Patricia A; Garratty, George
2010-07-01
Flow cytometry operators often apply familiar white blood cell (WBC) methods when studying red blood cell (RBC) antigens and antibodies. Some WBC methods are not appropriate for RBCs, as the analysis of RBCs requires special considerations, for example, avoidance of agglutination. One hundred seventy-six published articles from 88 groups studying RBC interactions were reviewed. Three fourths of groups used at least one unnecessary WBC procedure for RBCs, and about one fourth did not use any method to prevent/disperse RBC agglutination. Flow cytometric studies were performed to determine the effect of RBC agglutination on results and compare different methods of preventing and/or dispersing agglutination. The presence of RBC agglutinates have been shown to be affected by the type of pipette tip used for mixing RBC suspensions, the number of antigen sites/RBC, the type and concentration of primary antibody, and the type of secondary antibody. For quantitation methods, for example, fetal maternal hemorrhage, the presence of agglutinates have been shown to adversely affect results (fewer fetal D+ RBCs detected). Copyright 2010 Elsevier Inc. All rights reserved.
Development of a spreadsheet for SNPs typing using Microsoft EXCEL.
Hashiyada, Masaki; Itakura, Yukio; Takahashi, Shirushi; Sakai, Jun; Funayama, Masato
2009-04-01
Single-nucleotide polymorphisms (SNPs) have some characteristics that make them very appropriate for forensic studies and applications. In our institute, SNP typing was performed with TaqMan SNP Genotyping Assays using the ABI PRISM 7500 FAST Real-Time PCR System (Applied Biosystems) and Sequence Detection Software ver. 1.4 (Applied Biosystems). The TaqMan method required two positive controls (Allele 1 and 2) and one negative control to analyze each SNP locus. Therefore, up to 24 loci of one person can be analyzed on a 96-well plate at the same time. If SNP analysis is to be applied to biometric authentication, 48 or more loci are required to identify a person. In this study, we designed a spreadsheet package using Microsoft EXCEL, and population data were used from our 120-SNP population studies. On the spreadsheet, we defined SNP types using 'template files' instead of positive and negative controls. 'Template files' consisted of the results of 94 unknown samples and two negative controls for each of the 120 SNP loci we had previously studied. By using these files, the spreadsheet could analyze 96 SNPs on a 96-well plate simultaneously.
Fuzzy Comprehensive Evaluation Method Applied in the Real Estate Investment Risks Research
NASA Astrophysics Data System (ADS)
ML(Zhang Minli), Zhang; Wp(Yang Wenpo), Yang
Real estate investment is a high-risk, high-return economic activity; the key to real estate investment analysis is identifying the types of investment risk and effectively preventing each type of risk. As the financial crisis sweeps the world, the real estate industry also faces enormous risks, and how to evaluate real estate investment risks effectively and correctly has become a concern of numerous scholars [1]. In this paper, real estate investment risks are summarized and analyzed, comparative analysis methods are discussed, and a fuzzy comprehensive evaluation method is finally presented. The method is not only theoretically sound but also reliable in application; it provides an effective means for real estate investment risk assessment and gives investors guidance on risk factors and forecasts.
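The core computation of a fuzzy comprehensive evaluation, composing a factor weight vector with a membership matrix and applying the maximum-membership principle, can be sketched with invented weights and membership degrees (not the paper's data):

```python
import numpy as np

# Risk factors (rows) vs evaluation grades (columns: low, medium, high risk).
# Membership degrees would normally come from expert scoring; made up here.
R = np.array([[0.2, 0.5, 0.3],    # market risk
              [0.1, 0.4, 0.5],    # financial risk
              [0.5, 0.3, 0.2],    # policy risk
              [0.3, 0.4, 0.3]])   # construction risk
w = np.array([0.35, 0.30, 0.15, 0.20])   # factor weights (e.g. from AHP or experts)

# Weighted-average composition operator; normalize the result.
B = w @ R
B /= B.sum()
grades = ["low", "medium", "high"]
print(dict(zip(grades, np.round(B, 3))))
print("overall risk grade:", grades[int(np.argmax(B))])   # maximum-membership principle
```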
Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure the transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods, including the Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) methods, are studied in detail. In one simulation test, a multi-echo CPMG sequence activation is designed first, echo trains of ideal fluid models are synthesized, then an inversion algorithm is applied to these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are applied, the inversion results are in good agreement with the assumed fluid model, which indicates that the 3D NMR inversion method is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are made for oil-water and gas-water models, respectively, and the sensitivity to the fluids under different magnetic field gradients is examined in detail. The effect of the magnetic field gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic field gradient is chosen.
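A minimal illustration of SVD-based inversion of a discretized first-kind Fredholm equation, the structure underlying NMR relaxation inversion, is given below with a synthetic one-dimensional (T2-only) kernel; the tool's full 3D T2-T1-D inversion and the nonnegativity constraints used in practice are not reproduced.

```python
import numpy as np

# Discretized Fredholm problem: echo(t_i) = sum_j K[i, j] * f(T2_j), K = exp(-t/T2).
t = np.linspace(1e-3, 1.0, 200)                  # echo times (s)
T2 = np.logspace(-3, 0, 50)                      # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])

# Synthetic "measurement": two T2 components plus noise.
f_true = np.zeros(50); f_true[20] = 1.0; f_true[40] = 0.5
echo = K @ f_true + 0.01 * np.random.default_rng(0).normal(size=t.size)

# Truncated SVD inversion: keep only singular values above a noise-set cutoff.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
keep = s > 1e-2 * s[0]
f_est = Vt[keep].T @ ((U[:, keep].T @ echo) / s[keep])
print(int(keep.sum()), "singular values kept; estimated peak near index",
      int(np.argmax(f_est)))
```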
Garcillán-Barcia, M Pilar; Ruiz del Castillo, Belén; Alvarado, Andrés; de la Cruz, Fernando; Martínez-Martínez, Luis
2015-01-01
Degenerate Primer MOB Typing is a PCR-based protocol for the classification of γ-proteobacterial transmissible plasmids in five phylogenetic relaxase MOB families. It was applied to a multiresistant E. coli collection, previously characterized by PCR-based replicon-typing, in order to compare both methods. Plasmids from 32 clinical isolates of multiresistant E. coli (19 extended spectrum beta-lactamase producers and 13 non producers) and their transconjugants were analyzed. A total of 95 relaxases were detected, at least one per isolate, underscoring the high potential of these strains for antibiotic-resistance transmission. MOBP12 and MOBF12 plasmids were the most abundant. Most MOB subfamilies detected were present in both subsets of the collection, indicating a shared mobilome among multiresistant E. coli. The plasmid profile obtained by both methods was compared, which provided useful data upon which decisions related to the implementation of detection methods in the clinic could be based. The phylogenetic depth at which replicon and MOB-typing classify plasmids is different. While replicon-typing aims at plasmid replication regions with non-degenerate primers, MOB-typing classifies plasmids into relaxase subfamilies using degenerate primers. As a result, MOB-typing provides a deeper phylogenetic depth than replicon-typing and new plasmid groups are uncovered. Significantly, MOB typing identified 17 plasmids and an integrative and conjugative element, which were not detected by replicon-typing. Four of these backbones were different from previously reported elements. Copyright © 2014 Elsevier Inc. All rights reserved.
Monolithic high voltage nonlinear transmission line fabrication process
Cooper, G.A.
1994-10-04
A process for fabricating sequential inductors and varistor diodes of a monolithic, high voltage, nonlinear, transmission line in GaAs is disclosed. An epitaxially grown laminate is produced by applying a low doped active n-type GaAs layer to an n-plus type GaAs substrate. A heavily doped p-type GaAs layer is applied to the active n-type layer and a heavily doped n-type GaAs layer is applied to the p-type layer. Ohmic contacts are applied to the heavily doped n-type layer where diodes are desired. Multiple layers are then either etched away or Oxygen ion implanted to isolate individual varistor diodes. An insulator is applied between the diodes and a conductive/inductive layer is thereafter applied on top of the insulator layer to complete the process. 6 figs.
A scoping review of spatial cluster analysis techniques for point-event data.
Fritz, Charles E; Schuurman, Nadine; Robertson, Colin; Lear, Scott
2013-05-01
Spatial cluster analysis is a uniquely interdisciplinary endeavour, and so it is important to communicate and disseminate ideas, innovations, best practices and challenges across practitioners, applied epidemiology researchers and spatial statisticians. In this research we conducted a scoping review to systematically search peer-reviewed journal databases for research that has employed spatial cluster analysis methods on individual-level, address location, or x and y coordinate derived data. To illustrate the thematic issues raised by our results, methods were tested using a dataset where known clusters existed. Point pattern methods, spatial clustering and cluster detection tests, and a locally weighted spatial regression model were most commonly used for individual-level, address location data (n = 29). The spatial scan statistic was the most popular method for address location data (n = 19). Six themes were identified relating to the application of spatial cluster analysis methods and subsequent analyses, which we recommend researchers consider: exploratory analysis, visualization, spatial resolution, aetiology, scale and spatial weights. It is our intention that researchers seeking direction for using spatial cluster analysis methods consider the caveats and strengths of each approach, but also explore the numerous other methods available for this type of analysis. Applied spatial epidemiology researchers and practitioners should give special consideration to applying multiple tests to a dataset. Future research should focus on developing frameworks for selecting appropriate methods and the corresponding spatial weighting schemes.
Aghdam, Rosa; Baghfalaki, Taban; Khosravi, Pegah; Saberi Ansari, Elnaz
2017-12-01
Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of missingness mechanisms, ignorable and non-ignorable, with various missing rates. Next, 10 well-known imputation methods were applied to the complete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes are tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/. Copyright © 2017. Production and hosting by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Wong, G. T. F.; Tai, J. H.
2014-12-01
702 CTD profiles were collected in the subtropical northern South China Sea at and in the vicinity of the SouthEast Asian Time-series Study (SEATS) station (18.2°N, 115.8°E), between 17.5 and 18.5°N and 115.3 and 116.3°E, in 64 cruises from 1997 to 2013. The hydrographic structure of the upper water above the permanent thermocline may be classified into 4 principal types: (a) classic type (an almost isopycnic upper water); (b) stepwise type (with one or more small but significant step-increases in σθ in the upper water); (c) graded type (an approximately constant depth gradient in a monotonic increase in σθ in the upper water); and (d) mixed type (a combination of the stepwise and graded types). The 4 types of upper water were found in 75, 14, 5, and 6% of the cruises, respectively. Ten schemes were applied to these data to determine the mixed layer depth (MLD): 4 fixed temperature difference (FTD) methods (0.2, 0.5, 0.8 and 1.0°C decrease from 10 m); 1 fixed density difference (FDD) method (0.125 σθ increase from 10 m); 1 fixed temperature gradient (FTG) method (at 0.05°C/m); 3 fixed density gradient (FDG) methods (at 0.01, 0.05 and 0.1 σθ/m); and the maximum density gradient (MDG) method. MLD could not be clearly depicted in the 3 minor types of upper water. In the classic type, while similar MLDs were found in a large majority of the cruises among all 10 methods, substantial discrepancies among methods could be found. The most consistent results, generally within ±5 m, were found among the FDG methods at 0.05 and 0.1 σθ/m and the FTD methods at 0.8 and 1.0°C. The MDG method gave consistently deeper MLDs by ~8 m. If that difference was taken into account, the results were generally consistent with those from the other 4 methods. The remaining 5 methods could all yield MLDs shallower than the first 4 methods by >10 m, as they failed to capture the bottom of the mixed layer as indicated by visual inspection.
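A minimal sketch, under assumed profile values, of two of the density-based criteria listed above (the fixed density difference and fixed density gradient methods); it is not the cruise data or the full ten-scheme comparison.

```python
# Sketch of two MLD criteria on a toy density profile: fixed density difference
# (+0.125 sigma-theta from 10 m) and fixed density gradient (0.05 sigma-theta/m).
import numpy as np

depth = np.arange(0, 200.0, 1.0)                                        # m
sigma_theta = 21.0 + 0.002 * depth + 2.0 / (1 + np.exp(-(depth - 60) / 5))  # toy profile

ref = sigma_theta[depth == 10][0]                                       # reference at 10 m

# Fixed density difference (FDD): first depth where sigma exceeds ref + 0.125
mld_fdd = depth[np.argmax(sigma_theta > ref + 0.125)]

# Fixed density gradient (FDG): first depth where d(sigma)/dz exceeds 0.05 per m
grad = np.gradient(sigma_theta, depth)
mld_fdg = depth[np.argmax(grad > 0.05)]

print("MLD (FDD 0.125):", mld_fdd, "m;  MLD (FDG 0.05):", mld_fdg, "m")
```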
Estimating phase synchronization in dynamical systems using cellular nonlinear networks
NASA Astrophysics Data System (ADS)
Sowa, Robert; Chernihovskyi, Anton; Mormann, Florian; Lehnertz, Klaus
2005-06-01
We propose a method for estimating phase synchronization between time series using the parallel computing architecture of cellular nonlinear networks (CNN’s). Applying this method to time series of coupled nonlinear model systems and to electroencephalographic time series from epilepsy patients, we show that an accurate approximation of the mean phase coherence R —a bivariate measure for phase synchronization—can be achieved with CNN’s using polynomial-type templates.
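The mean phase coherence R itself has a compact definition; the sketch below computes it for two synthetic signals with a Hilbert transform on a conventional CPU, rather than with the CNN architecture the paper evaluates. The signals and noise level are assumptions.

```python
# Minimal sketch of the mean phase coherence R between two signals.
import numpy as np
from scipy.signal import hilbert

t = np.arange(0, 10, 1 / 500)                            # 10 s at 500 Hz
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.1 * np.random.randn(t.size)

phi_x = np.angle(hilbert(x))                             # instantaneous phases
phi_y = np.angle(hilbert(y))

R = np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))        # 1: full phase locking, 0: none
print("mean phase coherence R =", round(R, 3))
```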
Studies on the development of latent fingerprints by the method of solid-medium ninhydrin.
Yang, Ruiqin; Lian, Jie
2014-09-01
A new series of fingerprint developing membrane were prepared using ninhydrin as the developing agent, and pressure-sensitive emulsifiers as the encapsulated chemicals. The type of emulsifier, plastic film, concentration of the developing agent, modifying ions and thickness of the membrane were studied in order to get the optimized fingerprint developing effect. The membrane can be successfully applied to both latent sweat fingerprints and blood fingerprint on many different surfaces. The sensitivity of the method toward the latent sweat fingerprint is 0.1 mg/L amino acid. The membrane can be applied to both porous and non-porous surfaces. Fingerprints that are difficult to develop on surfaces such as leather, glass and heat-sensitive paper using traditional chemical methods can be successfully developed with this membrane. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A Comparison of Interactional Aerodynamics Methods for a Helicopter in Low Speed Flight
NASA Technical Reports Server (NTRS)
Berry, John D.; Letnikov, Victor; Bavykina, Irena; Chaffin, Mark S.
1998-01-01
Recent advances in computing subsonic flow have been applied to helicopter configurations with various degrees of success. This paper is a comparison of two specific methods applied to a particularly challenging regime of helicopter flight, very low speeds, where the interaction of the rotor wake and the fuselage are most significant. Comparisons are made between different methods of predicting the interactional aerodynamics associated with a simple generic helicopter configuration. These comparisons are made using fuselage pressure data from a Mach-scaled powered model helicopter with a rotor diameter of approximately 3 meters. The data shown are for an advance ratio of 0.05 with a thrust coefficient of 0.0066. The results of this comparison show that in this type of complex flow both analytical techniques have regions where they are more accurate in matching the experimental data.
NASA Astrophysics Data System (ADS)
Hoshor, Cory; Young, Stephan; Rogers, Brent; Currie, James; Oakes, Thomas; Scott, Paul; Miller, William; Caruso, Anthony
2014-03-01
A novel application of the Pearson Cross-Correlation to neutron spectral discernment in a moderating type neutron spectrometer is introduced. This cross-correlation analysis will be applied to spectral response data collected through both MCNP simulation and empirical measurement by the volumetrically sensitive spectrometer for comparison in 1, 2, and 3 spatial dimensions. The spectroscopic analysis methods discussed will be demonstrated to discern various common spectral and monoenergetic neutron sources.
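As a hedged illustration of the correlation step, the sketch below matches a measured detector-response vector against a small reference library by Pearson correlation; the source names and response values are placeholders, not the MCNP-simulated or measured library used in the paper.

```python
# Sketch: pick the closest-matching reference neutron source by Pearson correlation.
import numpy as np

library = {
    "source_A": np.array([0.05, 0.20, 0.35, 0.25, 0.15]),   # hypothetical responses
    "source_B": np.array([0.02, 0.10, 0.25, 0.35, 0.28]),
    "source_C": np.array([0.01, 0.05, 0.15, 0.30, 0.49]),
}
measured = np.array([0.03, 0.12, 0.27, 0.33, 0.25])

scores = {name: np.corrcoef(measured, resp)[0, 1] for name, resp in library.items()}
best = max(scores, key=scores.get)
print(scores, "-> best match:", best)
```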
Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers
NASA Astrophysics Data System (ADS)
Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu
2018-02-01
Fault characteristics of power electronic components are of great importance for a power electronic device, and are of extraordinary importance for those applied in power systems. The topology structures and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of the fault types and fault characteristics of IGBT and diode failures is presented. Analysis and simulation of the characteristics of the different fault types lead to a D-EPT fault location scheme.
Jia, G H
1989-12-01
This paper discusses the usage and effect of the IUD type O among rural married women in Guangdong province. The continuation rate of IUD type O is 71.7 per 100 women at one year. The main cause of failure was expulsion. This paper used a combination of univariate and multivariate analytic methods. On the whole, the important factors were gravidity and parity, number of induced abortions, and medical technical level.
Methods of determination of periods in the motion of asteroids
NASA Astrophysics Data System (ADS)
Bien, R.; Schubart, J.
Numerical techniques for the analysis of fundamental periods in asteroidal motion are evaluated. The specific techniques evaluated were: the periodogram analysis procedure of Wundt (1980); Stumpff's (1937) system of algebraic transformations; and Labrouste's procedure. It is shown that the Labrouste procedure permitted sufficient isolation of single oscillations from the quasi-periodic process of asteroidal motion. The procedure was applied to the analysis of resonance in the motion of Trojan-type and Hilda-type asteroids, and some preliminary results are discussed.
Evaluating the Risks of Clinical Research: Direct Comparative Analysis
Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David
2014-01-01
Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks. PMID:25210944
Graph theory applied to the analysis of motor activity in patients with schizophrenia and depression
Fasmer, Erlend Eindride; Fasmer, Ole Bernt; Berle, Jan Øystein; Oedegaard, Ketil J.; Hauge, Erik R.
2018-01-01
Depression and schizophrenia are defined only by their clinical features, and diagnostic separation between them can be difficult. Disturbances in motor activity pattern are central features of both types of disorders. We introduce a new method to analyze time series, called the similarity graph algorithm. Time series of motor activity, obtained from actigraph registrations over 12 days in depressed and schizophrenic patients, were mapped into a graph and we then applied techniques from graph theory to characterize these time series, primarily looking for changes in complexity. The most marked finding was that depressed patients were found to be significantly different from both controls and schizophrenic patients, with evidence of less regularity of the time series, when analyzing the recordings with one hour intervals. These findings support the contention that there are important differences in control systems regulating motor behavior in patients with depression and schizophrenia. The similarity graph algorithm we have described can easily be applied to the study of other types of time series. PMID:29668743
Buyuk, S Kutalmış; Kucukekenci, Ahmet Serkan
2018-03-01
To investigate the shear bond strength (SBS) of orthodontic metal brackets applied to different types of ceramic surfaces treated with different etching procedures and bonding agents. Monolithic CAD/CAM ceramic specimens (N = 120; n = 40 each group) of feldspathic ceramic Vita Mark II, resin nanoceramic Lava Ultimate, and hybrid ceramic Vita Enamic were fabricated (14 × 12 × 3 mm). Ceramic specimens were separated into four subgroups (n = 10) according to type of surface treatment and bonding onto the ceramic surface. Within each group, four subgroups were prepared by phosphoric acid, hydrofluoric acid, Transbond XT primer, and Clearfill Ceramic primer. Mandibular central incisor metal brackets were bonded with light-cure composite. The SBS data were analyzed using three-way analysis of variance (ANOVA) and Tukey HSD tests. The highest SBS was found in the Vita Enamic group, which is a hybrid ceramic, etched with hydrofluoric acid and applied Transbond XT Adhesive primer (7.28 ± 2.49 MPa). The lowest SBS was found in the Lava Ultimate group, which is a resin nano-ceramic etched with hydrofluoric acid and applied Clearfill ceramic primer (2.20 ± 1.21 MPa). CAD/CAM material types and bonding procedures affected bond strength ( P < .05), but the etching procedure did not ( P > .05). The use of Transbond XT as a primer bonding agent resulted in higher SBS.
Convolution- and Fourier-transform-based reconstructors for pyramid wavefront sensor.
Shatokhina, Iuliia; Ramlau, Ronny
2017-08-01
In this paper, we present two novel algorithms for wavefront reconstruction from pyramid-type wavefront sensor data. An overview of the current state-of-the-art in the application of pyramid-type wavefront sensors shows that the novel algorithms can be applied in various scientific fields such as astronomy, ophthalmology, and microscopy. Assuming a computationally very challenging setting corresponding to the extreme adaptive optics (XAO) on the European Extremely Large Telescope, we present the results of the performed end-to-end simulations and compare the achieved AO correction quality (in terms of the long-exposure Strehl ratio) to other methods, such as matrix-vector multiplication and preprocessed cumulative reconstructor with domain decomposition. We also compare our novel algorithms to other existing methods for this type of sensor in terms of applicability, computational complexity, and closed-loop performance.
A contour-based shape descriptor for biomedical image classification and retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetypes elaboration; rules definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
Learn from every mistake! Hierarchical information combination in astronomy
NASA Astrophysics Data System (ADS)
Süveges, Maria; Fotopoulou, Sotiria; Coupon, Jean; Paltani, Stéphane; Eyer, Laurent; Rimoldini, Lorenzo
2017-06-01
Throughout the processing and analysis of survey data, a ubiquitous issue nowadays is that we are spoilt for choice when we need to select a methodology for some of its steps. The alternative methods usually fail and excel in different data regions, and have various advantages and drawbacks, so a combination that unites the strengths of all while suppressing the weaknesses is desirable. We propose to use a two-level hierarchy of learners. Its first level consists of training and applying the possible base methods on the first part of a known set. At the second level, we feed the output probability distributions from all base methods to a second learner trained on the remaining known objects. Using classification of variable stars and photometric redshift estimation as examples, we show that the hierarchical combination is capable of achieving general improvement over averaging-type combination methods, correcting systematics present in all base methods, is easy to train and apply, and thus, it is a promising tool in the astronomical ``Big Data'' era.
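A minimal sketch of the two-level idea with scikit-learn, assuming synthetic data and generic base learners: the base methods are fit on the first part of a labelled set and their output class probabilities are fed to a second-level learner trained on the remainder. The actual base methods, survey features and tuning of the paper are not reproduced here.

```python
# Sketch: two-level hierarchical combination (stacking) of base classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)

bases = [RandomForestClassifier(n_estimators=100, random_state=0),
         LogisticRegression(max_iter=1000)]
for m in bases:
    m.fit(X1, y1)                                           # level 1: fit base methods

meta_features = np.hstack([m.predict_proba(X2) for m in bases])  # output probabilities
meta = LogisticRegression(max_iter=1000).fit(meta_features, y2)  # level 2 combiner

# In practice the combined model would be evaluated on held-out objects.
print("second-level training accuracy:", meta.score(meta_features, y2))
```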
Constraining f(R) theories with cosmography
NASA Astrophysics Data System (ADS)
Anabella Teppa Pannia, Florencia; Esteban Perez Bergliaffa, Santiago
2013-08-01
A method to set constraints on the parameters of extended theories of gravitation is presented. It is based on the comparison of two series expansions of any observable that depends on H(z). The first expansion is of the cosmographical type, while the second uses the dependence of H with z furnished by a given type of extended theory. When applied to f(R) theories together with the redshift drift, the method yields limits on the parameters of two examples (the theory of Hu and Sawicki [1], and the exponential gravity introduced by Linder [2]) that are compatible with or more stringent than the existing ones, as well as a limit for a previously unconstrained parameter.
NASA Astrophysics Data System (ADS)
Saha, Debajyoti; Shaw, Pankaj Kumar; Ghosh, Sabuj; Janaki, M. S.; Sekar Iyengar, A. N.
2018-01-01
We have carried out a detailed study of scaling regions using a detrended fractal analysis test, by applying different forcings (noise, sinusoidal, and square) to the floating potential fluctuations acquired under different pressures in a DC glow discharge plasma. The transition in the dynamics is observed through recurrence plot techniques, an efficient method for observing critical regime transitions in the dynamics. The complexity of the nonlinear fluctuations is revealed with the help of recurrence quantification analysis, a suitable tool for investigating recurrence, a ubiquitous feature that provides deep insight into the dynamics of real dynamical systems. An informal test for stationarity, which checks the compatibility of nonlinear approximations to the dynamics made in different segments of a time series, is proposed. For sinusoidal, noise, and square forcing applied to the fluctuations acquired at P = 0.12 mbar, only one dominant scaling region is observed, whereas for the fluctuations acquired at P = 0.04 mbar two prominent scaling regions are found reliably using different forcing amplitudes, indicating the signature of crossover phenomena. Furthermore, persistent long-range behavior is observed in one of these scaling regions. A comprehensive study of the quantification of the scaling exponents is carried out as the amplitude and frequency of the sinusoidal and square forcings increase. The scaling exponent can be regarded as the roughness of the time series. The method provides a single quantitative measure, the scaling exponent, to quantify the correlation properties of a signal.
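As a hedged illustration, the sketch below implements a standard detrended fluctuation analysis (one common form of the detrended fractal analysis referred to above) on white noise; the scaling exponent is the slope of log F(n) versus log n, and a crossover appears as a change of slope between scaling regions. The signal and scale choices are assumptions, not the plasma data.

```python
# Minimal detrended fluctuation analysis (DFA) sketch; white noise gives alpha ~ 0.5.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        rms = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

x = np.random.randn(5000)
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]        # scaling exponent
print("DFA scaling exponent alpha =", round(alpha, 2))
```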
Developing a Conceptually Equivalent Type 2 Diabetes Risk Score for Indian Gujaratis in the UK
Patel, Naina; Stone, Margaret; Barber, Shaun; Gray, Laura; Davies, Melanie; Khunti, Kamlesh
2016-01-01
Aims. To apply and assess the suitability of a model consisting of commonly used cross-cultural translation methods to achieve a conceptually equivalent Gujarati language version of the Leicester self-assessment type 2 diabetes risk score. Methods. Implementation of the model involved multiple stages, including pretesting of the translated risk score by conducting semistructured interviews with a purposive sample of volunteers. Interviews were conducted on an iterative basis to enable findings to inform translation revisions and to elicit volunteers' ability to self-complete and understand the risk score. Results. The pretest stage was an essential component involving recruitment of a diverse sample of 18 Gujarati volunteers, many of whom gave detailed suggestions for improving the instructions for the calculation of the risk score and BMI table. Volunteers found the standard and level of Gujarati accessible and helpful in understanding the concept of risk, although many of the volunteers struggled to calculate their BMI. Conclusions. This is the first time that a multicomponent translation model has been applied to the translation of a type 2 diabetes risk score into another language. This project provides an invaluable opportunity to share learning about the transferability of this model for translation of self-completed risk scores in other health conditions. PMID:27703985
Szakmár, Katalin; Reichart, Olivér; Szatmári, István; Erdősi, Orsolya; Szili, Zsuzsanna; László, Noémi; Székely Körmöczy, Péter; Laczay, Péter
2014-09-01
The potential effect of doxycycline on microbial activity was investigated in three types of soil. Soil samples were spiked with doxycycline, incubated at 25°C and tested at 0, 2, 4 and 6 days after treatment. The microbiological activity of the soil was characterized by the viable count determined by plate pouring and by the time necessary to reach a defined rate of redox-potential decrease, termed time to detection (TTD). The viable count of the samples did not change during storage. The TTD values, however, exhibited a significant increase in the 0.2-1.6 mg/kg doxycycline concentration range compared to the untreated samples, indicating a concentration-dependent inhibitory effect on microbial activity. The potency of the effect differed among the 3 soil types. To describe the combined effect of doxycycline concentration and time on the biological activity of one type of soil, a mathematical model was constructed and applied. The change in microbial metabolic rate could thus be measured even without a (detectable) change in microbial count, where traditional microbiological methods are not applicable. The new redox-potential-measurement-based method is a simple and useful procedure for examining the microbial activity of soil and its potential inhibition by antibiotics.
Chu, Chu; Wei, Mengmeng; Wang, Shan; Zheng, Liqiong; He, Zheng; Cao, Jun; Yan, Jizhong
2017-09-15
A simple and effective method was developed for determining lignans in Schisandrae Chinensis Fructus by using a micro-matrix solid phase dispersion (MSPD) technique coupled with microemulsion electrokinetic chromatography (MEEKC). Molecular sieve, TS-1, was applied as a solid supporting material in micro MSPD extraction for the first time. Parameters that affect extraction efficiency, such as type of dispersant, mass ratio of the sample to the dispersant, grinding time, elution solvent and volume were optimized. The optimal extraction conditions involve dispersing 25mg of powdered Schisandrae samples with 50mg of TS-1 by a mortar and pestle. A grinding time of 150s was adopted. The blend was then transferred to a solid-phase extraction cartridge and the target analytes were eluted with 500μL of methanol. Moreover, several parameters affecting MEEKC separation were studied, including the type of oil, SDS concentration, type and concentration of cosurfactant, and concentration of organic modifier. A satisfactory linearity (R>0.9998) was obtained, and the calculated limits of quantitation were less than 2.77μg/mL. Finally, the micro MSPD-MEEKC method was successfully applied to the analysis of lignans in complex Schisandrae fructus samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaabani, Anouar; Njeh, Anouar; Donner, Wolfgang; Klein, Andreas; Hédi Ben Ghozlen, Mohamed
2017-05-01
Ba0.65Sr0.35TiO3 (BST) thin films of 300 nm were deposited on Pt(111)/TiO2/SiO2/Si(001) substrates by radio frequency magnetron sputtering. Two thin films with different (111) and (001) fiber textures were prepared. X-ray diffraction was applied to measure texture. The raw pole figure data were further processed using the MTEX quantitative texture analysis software for plotting pole figures and calculating elastic constants and Young's modulus from the orientation distribution function (ODF) for each type of textured fiber. The calculated elastic constants were used in theoretical studies of surface acoustic waves (SAW) propagating in two types of multilayered BST systems. Theoretical dispersion curves were plotted by applying the ordinary differential equation (ODE) and stiffness matrix (SMM) methods. A laser acoustic wave (LAW) technique was applied to generate surface acoustic waves propagating in the BST films, and from a recursive process the effective Young's modulus is determined for the two samples. These methods are used to extract and compare the elastic properties of the two types of BST films, and to quantify the influence of texture on the direction-dependent Young's modulus.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
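For reference, a minimal sketch of the Crawford-Howell modified t-test that the LMM approach generalizes, comparing one case with a small control sample; the control scores and the single-case score below are made-up numbers.

```python
# Sketch: Crawford-Howell modified t-test for a single case vs. a small control sample.
import numpy as np
from scipy import stats

controls = np.array([52.0, 48.0, 55.0, 50.0, 47.0, 53.0, 49.0, 51.0])   # n = 8 controls
case = 38.0                                                              # single-case score

n = controls.size
t = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
p = 2 * stats.t.sf(abs(t), df=n - 1)                     # two-sided p-value, n-1 df
print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}")

# Equivalently, a dummy-coded predictor (0 = control, 1 = case) in a linear
# regression yields the same t statistic, which is the formulation the LMM extends.
```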
Calculation of photoionization differential cross sections using complex Gauss-type orbitals.
Matsuzaki, Rei; Yabushita, Satoshi
2017-09-05
Accurate theoretical calculation of photoelectron angular distributions for general molecules is becoming an important tool to image various chemical reactions in real time. We show in this article that not only photoionization total cross sections but also photoelectron angular distributions can be accurately calculated using complex Gauss-type orbital (cGTO) basis functions. Our method can be easily combined with existing quantum chemistry techniques including electron correlation effects, and applied to various molecules. The so-called two-potential formula is applied to represent the transition dipole moment from an initial bound state to a final continuum state in the molecular coordinate frame. The two required continuum functions, the zeroth-order final continuum state and the first-order wave function induced by the photon field, have been variationally obtained using the complex basis function method with a mixture of appropriate cGTOs and conventional real Gauss-type orbitals (GTOs) to represent the continuum orbitals as well as the remaining bound orbitals. The complex orbital exponents of the cGTOs are optimized by fitting to the outgoing Coulomb functions. The efficiency of the current method is demonstrated through the calculations of the asymmetry parameters and molecular-frame photoelectron angular distributions of H2+ and H2. In the calculations of H2, the static exchange and random phase approximations are employed, and the dependence of the results on the basis functions is discussed. © 2017 Wiley Periodicals, Inc.
Damage characterization of E-glass and C-glass fibre polymer composites after high velocity impact
NASA Astrophysics Data System (ADS)
Razali, N.; Sultan, M. T. H.; Cardona, F.; Jawaid, M.
2017-12-01
The purpose of this work is to identify impact damage on glass fibre reinforced polymer composite structures after high velocity impact. In this research, Type C-glass (600 g/m2) and Type E-glass (600 g/m2) were used to fabricate glass fibre-reinforced polymer composite (GFRP) plates. The panels were fabricated using a vacuum bagging and hot bonder method. A single stage gas gun (SSGG) was used for the testing and a data acquisition system was used to collect the damage data. Different types of bullets and different pressure levels were used for the experiment. The obtained results showed that the C-glass type of GFRP experienced more damage than the E-glass type of material, based on the amount of energy absorbed on impact and the size of the damage area. All specimens underwent partial fibre breakage but the laminates were not fully penetrated by the bullets. This indicates that both types of materials have high impact resistance even though the applied pressures of the gas gun were in the high range. We conclude that, within the material specifications of the laminates, including the type of glass fibre reinforcement and the thickness of the panels, these composite materials are safe to be applied in structural and body armour applications as an alternative to more expensive materials such as Kevlar and Type S-glass fibre based panels.
Automated classification of dolphin echolocation click types from the Gulf of Mexico.
Frasier, Kaitlin E; Roch, Marie A; Soldevilla, Melissa S; Wiggins, Sean M; Garrison, Lance P; Hildebrand, John A
2017-12-01
Delphinids produce large numbers of short duration, broadband echolocation clicks which may be useful for species classification in passive acoustic monitoring efforts. A challenge in echolocation click classification is to overcome the many sources of variability to recognize underlying patterns across many detections. An automated unsupervised network-based classification method was developed to simulate the approach a human analyst uses when categorizing click types: Clusters of similar clicks were identified by incorporating multiple click characteristics (spectral shape and inter-click interval distributions) to distinguish within-type from between-type variation, and identify distinct, persistent click types. Once click types were established, an algorithm for classifying novel detections using existing clusters was tested. The automated classification method was applied to a dataset of 52 million clicks detected across five monitoring sites over two years in the Gulf of Mexico (GOM). Seven distinct click types were identified, one of which is known to be associated with an acoustically identifiable delphinid (Risso's dolphin) and six of which are not yet identified. All types occurred at multiple monitoring locations, but the relative occurrence of types varied, particularly between continental shelf and slope locations. Automatically-identified click types from autonomous seafloor recorders without verifiable species identification were compared with clicks detected on sea-surface towed hydrophone arrays in the presence of visually identified delphinid species. These comparisons suggest potential species identities for the animals producing some echolocation click types. The network-based classification method presented here is effective for rapid, unsupervised delphinid click classification across large datasets in which the click types may not be known a priori.
Goode, D.J.; Konikow, Leonard F.
1989-01-01
The U.S. Geological Survey computer model of two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978) has been modified to incorporate the following types of chemical reactions: (1) first-order irreversible rate-reaction, such as radioactive decay; (2) reversible equilibrium-controlled sorption with linear, Freundlich, or Langmuir isotherms; and (3) reversible equilibrium-controlled ion exchange for monovalent or divalent ions. Numerical procedures are developed to incorporate these processes in the general solution scheme that uses method-of-characteristics with particle tracking for advection and finite-difference methods for dispersion. The first type of reaction is accounted for by an exponential decay term applied directly to the particle concentration. The second and third types of reactions are incorporated through a retardation factor, which is a function of concentration for nonlinear cases. The model is evaluated and verified by comparison with analytical solutions for linear sorption and decay, and by comparison with other numerical solutions for nonlinear sorption and ion exchange.
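For concreteness, the two mechanisms can be written in their standard forms (a sketch using conventional notation, with ρ_b the bulk density, θ the porosity and K_d the distribution coefficient; the Freundlich, Langmuir and ion-exchange options make the retardation factor concentration-dependent):

```latex
% First-order irreversible decay applied directly to each particle's concentration:
C(t + \Delta t) = C(t)\, e^{-\lambda \Delta t}

% Retardation factor for reversible, equilibrium-controlled linear sorption:
R_f = 1 + \frac{\rho_b}{\theta} K_d
```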
Non-invasive prenatal detection of achondroplasia using circulating fetal DNA in maternal plasma.
Lim, Ji Hyae; Kim, Mee Jin; Kim, Shin Young; Kim, Hye Ok; Song, Mee Jin; Kim, Min Hyoung; Park, So Yeon; Yang, Jae Hyug; Ryu, Hyun Mee
2011-02-01
To perform a reliable non-invasive detection of fetal achondroplasia using maternal plasma. We developed a quantitative fluorescent-polymerase chain reaction (QF-PCR) method suitable for detection of the FGFR3 mutation (G1138A) causing achondroplasia. This method was applied in a non-invasive detection of fetal achondroplasia using circulating fetal DNA (cf-DNA) in maternal plasma. Maternal plasmas were obtained at 27 weeks of gestational age from women carrying an achondroplasia fetus or a normal fetus. Two percent or less achondroplasia DNA was reliably detected by QF-PCR. In a woman carrying a normal fetus, analysis of cf-DNA showed only one peak of the wild-type G allele. In a woman carrying an achondroplasia fetus, analysis of cf-DNA showed the two peaks of the wild-type G allele and the mutant-type A allele and accurately detected the fetal achondroplasia. The non-invasive method using maternal plasma and QF-PCR may be useful for diagnosis of fetal achondroplasia.
NASA Astrophysics Data System (ADS)
Ma, L.; Zhou, M.; Li, C.
2017-09-01
In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, which were acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features, namely distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis, were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to predict the types of land cover in the Miyun area as trees, buildings, farmland and ground. The classification results for these four types of land cover were assessed according to ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of the SVM method and showed better performance. The RF classification accuracy reached 89.73% and the classification Kappa was 0.8631.
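A minimal sketch of this prediction step with scikit-learn, assuming a hypothetical training table of the five waveform features named above and integer labels for the four land-cover classes; it is not the Miyun dataset or the paper's tuned model.

```python
# Sketch: Random Forest land-cover prediction from full-waveform features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["distance", "intensity", "fwhm", "skewness", "kurtosis"]
X_train = rng.normal(size=(1000, len(features)))      # hypothetical waveform features
y_train = rng.integers(0, 4, 1000)                    # 0 trees, 1 buildings, 2 farmland, 3 ground

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

X_new = rng.normal(size=(5, len(features)))           # new waveforms to classify
print("predicted land-cover classes:", rf.predict(X_new))
```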
Comparison of Different EHG Feature Selection Methods for the Detection of Preterm Labor
Alamedine, D.; Khalil, M.; Marque, C.
2013-01-01
Numerous types of linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only the relevant ones which are useful for solving the classification problem. This paper presents three methods for feature subset selection that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The two last methods are based on a classifier and were tested with three types of classifiers. These methods have allowed us to identify common features which are relevant for contraction classification. PMID:24454536
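As a hedged illustration of one of the three strategies, the sketch below performs a greedy sequential forward selection wrapped around an SVM with cross-validation; the synthetic data stands in for the EHG features, and the classifier choice is an assumption.

```python
# Sketch: sequential forward selection (SFS) of 5 features around an SVM classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=15, n_informative=5, random_state=1)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(5):                                    # greedily add the best feature 5 times
    scores = {f: cross_val_score(SVC(), X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)

print("selected feature indices:", selected)
```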
Wan, Neng; Lin, Ge
2016-12-01
Smartphones have emerged as a promising type of equipment for monitoring human activities in environmental health studies. However, degraded location accuracy and inconsistency of smartphone-measured GPS data have limited its effectiveness for classifying human activity patterns. This study proposes a fuzzy classification scheme for differentiating human activity patterns from smartphone-collected GPS data. Specifically, a fuzzy logic reasoning was adopted to overcome the influence of location uncertainty by estimating the probability of different activity types for single GPS points. Based on that approach, a segment aggregation method was developed to infer activity patterns, while adjusting for uncertainties of point attributes. Validations of the proposed methods were carried out based on a convenient sample of three subjects with different types of smartphones. The results indicate desirable accuracy (e.g., up to 96% in activity identification) with use of this method. Two examples were provided in the appendix to illustrate how the proposed methods could be applied in environmental health studies. Researchers could tailor this scheme to fit a variety of research topics.
A comparison of radiosity with current methods of sound level prediction in commercial spaces
NASA Astrophysics Data System (ADS)
Beamer, C. Walter, IV; Muehleisen, Ralph T.
2002-11-01
The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, A.; Ruhlmann-Kleider, V.; Leloup, C.
In the era of large astronomical surveys, photometric classification of supernovae (SNe) has become an important research field due to limited spectroscopic resources for candidate follow-up and classification. In this work, we present a method to photometrically classify type Ia supernovae based on machine learning with redshifts that are derived from the SN light-curves. This method is implemented on real data from the SNLS deferred pipeline, a purely photometric pipeline that identifies SNe Ia at high-redshifts (0.2 < z < 1.1). Our method consists of two stages: feature extraction (obtaining the SN redshift from photometry and estimating light-curve shape parameters) and machine learning classification. We study the performance of different algorithms such as Random Forest and Boosted Decision Trees. We evaluate the performance using SN simulations and real data from the first 3 years of the Supernova Legacy Survey (SNLS), which contains large spectroscopically and photometrically classified type Ia samples. Using the Area Under the Curve (AUC) metric, where perfect classification is given by 1, we find that our best-performing classifier (Extreme Gradient Boosting Decision Tree) has an AUC of 0.98. We show that it is possible to obtain a large photometrically selected type Ia SN sample with an estimated contamination of less than 5%. When applied to data from the first three years of SNLS, we obtain 529 events. We investigate the differences between classifying simulated SNe, and real SN survey data. In particular, we find that applying a thorough set of selection cuts to the SN sample is essential for good classification. This work demonstrates for the first time the feasibility of machine learning classification in a high-z SN survey with application to real SN data.
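A minimal sketch of the classification stage, assuming synthetic light-curve-derived features and scikit-learn's gradient boosting as a stand-in for the XGBoost configuration and SNLS features used in the paper; it simply shows a boosted-tree classifier scored by the AUC.

```python
# Sketch: boosted decision tree classification evaluated with the AUC metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced two-class problem standing in for Ia vs. non-Ia features.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]                 # probability of the positive class
print("AUC =", round(roc_auc_score(y_te, scores), 3))  # 1 would be perfect classification
```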
MO-DE-BRA-05: Developing Effective Medical Physics Knowledge Structures: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprawls, P
Purpose: Develop a method and supporting online resources to be used by medical physics educators for teaching medical imaging professionals and trainees so they develop highly-effective physics knowledge structures that can contribute to improved diagnostic image quality on a global basis. Methods: The different types of mental knowledge structures were analyzed and modeled with respect to both the learning and teaching process for their development and the functions or tasks that can be performed with the knowledge. While symbolic verbal and mathematical knowledge structures are very important in medical physics for many purposes, the tasks of applying physics in clinical imaging--especially to optimize image quality and diagnostic accuracy--require a sensory conceptual knowledge structure, specifically, an interconnected network of visually based concepts. This type of knowledge supports tasks such as analysis, evaluation, problem solving, interacting, and creating solutions. Traditional educational methods including lectures, online modules, and many texts are serial procedures and limited with respect to developing interconnected conceptual networks. A method consisting of the synergistic combination of on-site medical physics teachers and the online resource, CONET (Concept network developer), has been developed and made available for the topic Radiographic Image Quality. This was selected as the inaugural topic, others to follow, because it can be used by medical physicists teaching the large population of medical imaging professionals, such as radiology residents, who can apply the knowledge. Results: Tutorials for medical physics educators on developing effective knowledge structures are being presented and published, and CONET is available with open access for all to use. Conclusion: An adjunct to traditional medical physics educational methods with the added focus on sensory concept development provides opportunities for medical physics teachers to share their knowledge and experience at a higher cognitive level and produce medical professionals with the enhanced ability to apply physics to clinical procedures.
NASA Astrophysics Data System (ADS)
Valizadeh, Maryam; Sohrabi, Mahmoud Reza
2018-03-01
In the present study, artificial neural networks (ANNs) and support vector regression (SVR) were applied as intelligent methods, coupled with UV spectroscopy, for the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed for validating the proposed methods. First, a neural network time-series model, one type of artificial neural network, was employed and its efficiency was evaluated. Afterwards, a radial basis network was applied as another type of neural network. Results showed that the performance of this method is suitable for prediction. Finally, support vector regression was proposed to construct the Zilomole prediction model. Root mean square error (RMSE) and mean recovery (%) were also calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of results from the suggested and reference methods, showed that there were no significant differences between them. The effect of interferences was also investigated in spiked solutions.
NASA Astrophysics Data System (ADS)
Yusoff, Mohd Suffian; Azwan, Azlyza Mohd; Zamri, Mohd Faiz Muaz Ahmad; Aziz, Hamidi Abdul
2017-10-01
In this study, the electrocoagulation method is used to treat slaughterhouse wastewaters. The aim of this study is to determine the efficiency of the electrocoagulation method for the removal of colour, turbidity, and oil and grease from slaughterhouse wastewaters. The study parameters are the electrode type and the voltage applied during treatment. The types of electrode used are Aluminium (Al) grade 6082 and Iron (Fe) grade 1050. The applied voltages are 2, 4, 6 and 8 volts, with treatment times of 10, 20 and 30 minutes, respectively. The effects of these factors on the removal of fat, oil and grease (FOG), colour and turbidity are analyzed. The results show that the maximum removals of FOG, colour and turbidity are recorded using the Fe electrode at 8 V of applied voltage with 30 minutes of treatment time. An increase in treatment time also increases the amount of hydrogen bubbles at the cathode, which results in a greater upward flux and a faster removal of FOG, turbidity and colour. The removals of FOG, colour and turbidity are 98%, 92% and 91%, respectively. Meanwhile, using Al electrodes under the same conditions, the removals of FOG, colour and turbidity are 91%, 85% and 87%, respectively, whereas using the Fe-Al electrode pair, the removals of FOG, colour and turbidity are found to be 90%, 87% and 76%, respectively. In this case, the Fe-Fe electrode pair has been shown to provide the best performance for FOG, colour and turbidity removal from slaughterhouse wastewaters. Therefore, it is feasible to consider it as an alternative method for wastewater treatment.
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
NASA Astrophysics Data System (ADS)
Äijälä, Mikko; Heikkinen, Liine; Fröhlich, Roman; Canonaco, Francesco; Prévôt, André S. H.; Junninen, Heikki; Petäjä, Tuukka; Kulmala, Markku; Worsnop, Douglas; Ehn, Mikael
2017-03-01
Mass spectrometric measurements commonly yield data on hundreds of variables over thousands of points in time. Refining and synthesizing this raw data into chemical information necessitates the use of advanced, statistics-based data analytical techniques. In the field of analytical aerosol chemistry, statistical, dimensionality-reductive methods have become widespread in the last decade, yet comparable advanced chemometric techniques for data classification and identification remain marginal. Here we present an example of combining data dimensionality reduction (factorization) with exploratory classification (clustering), and show that the results can not only reproduce and corroborate earlier findings, but also complement and broaden our current perspectives on aerosol chemical classification. We find that applying positive matrix factorization to extract spectral characteristics of the organic component of air pollution plumes, together with an unsupervised clustering algorithm, k-means++, for classification, reproduces classical organic aerosol speciation schemes. Applying appropriately chosen metrics for spectral dissimilarity along with optimized data weighting, the source-specific pollution characteristics can be statistically resolved even for spectrally very similar aerosol types, such as different combustion-related anthropogenic aerosol species and atmospheric aerosols with a similar degree of oxidation. In addition to the typical oxidation-level and source-driven aerosol classification, we were also able to classify and characterize outlier groups that would likely be disregarded in a more conventional analysis. Evaluating solution quality for the classification also provides a means to assess the performance of mass spectral similarity metrics and to optimize weighting for mass spectral variables. This facilitates algorithm-based evaluation of aerosol spectra, which may prove invaluable for future development of automatic methods for spectra identification and classification. Robust, statistics-based results and data visualizations also provide important clues to a human analyst on the existence and chemical interpretation of data structures. Applying these methods to a test data set, aerosol mass spectrometric data of organic aerosol from a boreal forest site, yielded five to seven recurring pollution types from various sources, including traffic, cooking, biomass burning and nearby sawmills. Additionally, three distinct, minor pollution types were discovered and identified as amine-dominated aerosols.
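A minimal sketch of the classification step, on synthetic stand-in data: k-means++ seeded clustering applied to unit-normalized factor mass spectra. The cluster count, the normalization, and the Euclidean distance used by KMeans are illustrative simplifications; the study used purpose-chosen spectral dissimilarity metrics and optimized variable weighting.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_spectra, n_mz = 200, 300
factor_spectra = rng.random((n_spectra, n_mz))               # stand-in factor profiles
factor_spectra /= factor_spectra.sum(axis=1, keepdims=True)  # normalize to unit sum

# "k-means++" seeding is the default initialization in scikit-learn's KMeans.
km = KMeans(n_clusters=6, init="k-means++", n_init=20, random_state=0)
labels = km.fit_predict(factor_spectra)
print("cluster sizes:", np.bincount(labels))
print("within-cluster sum of squares:", round(km.inertia_, 3))
```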
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Applications of alignment-free methods in epigenomics.
Pinello, Luca; Lo Bosco, Giosuè; Yuan, Guo-Cheng
2014-05-01
Epigenetic mechanisms play an important role in the regulation of cell type-specific gene activities, yet how epigenetic patterns are established and maintained remains poorly understood. Recent studies have supported a role of DNA sequences in recruitment of epigenetic regulators. Alignment-free methods have been applied to identify distinct sequence features that are associated with epigenetic patterns and to predict epigenomic profiles. Here, we review recent advances in such applications, including the methods to map DNA sequence to feature space, sequence comparison and prediction models. Computational studies using these methods have provided important insights into the epigenetic regulatory mechanisms.
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1992-01-01
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Numerical solution of distributed order fractional differential equations
NASA Astrophysics Data System (ADS)
Katsikadelis, John T.
2014-02-01
In this paper a method for the numerical solution of distributed order FDEs (fractional differential equations) of a general form is presented. The method applies to both linear and nonlinear equations. The Caputo type fractional derivative is employed. The distributed order FDE is approximated with a multi-term FDE, which is then solved by adjusting appropriately the numerical method developed for multi-term FDEs by Katsikadelis. Several example equations are solved and the response of mechanical systems described by such equations is studied. The convergence and the accuracy of the method for linear and nonlinear equations are demonstrated through well corroborated numerical results.
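In outline (a generic formulation with notation assumed for illustration, not taken verbatim from the paper), the approximation step replaces the distributed-order operator by a quadrature sum, which turns the problem into a multi-term FDE:

$$\int_{0}^{m} \rho(\alpha)\, {}^{C}\!D^{\alpha} u(t)\, d\alpha \;\approx\; \sum_{j=1}^{K} w_j\, \rho(\alpha_j)\, {}^{C}\!D^{\alpha_j} u(t) = f\big(t, u(t)\big),$$

where $\rho(\alpha)$ is the order-density function, ${}^{C}\!D^{\alpha}$ the Caputo derivative, and $w_j$, $\alpha_j$ the quadrature weights and nodes; the resulting multi-term equation is then integrated with the numerical scheme cited in the abstract.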
Efficient propagation of the hierarchical equations of motion using the matrix product state method
NASA Astrophysics Data System (ADS)
Shi, Qiang; Xu, Yang; Yan, Yaming; Xu, Meng
2018-05-01
We apply the matrix product state (MPS) method to propagate the hierarchical equations of motion (HEOM). It is shown that the MPS approximation works well in different types of problems, including boson and fermion baths. The MPS method based on the time-dependent variational principle is also found to be applicable to HEOM with over one thousand effective modes. Combining the flexibility of the HEOM in defining the effective modes with the efficiency of the MPS method may thus provide a promising tool for simulating quantum dynamics in condensed phases.
Discovery of cancer common and specific driver gene sets
2017-01-01
Cancer is known as a disease mainly caused by gene alterations. Discovery of mutated driver pathways or gene sets is becoming an important step toward understanding the molecular mechanisms of carcinogenesis. However, systematically investigating the commonalities and specificities of driver gene sets among multiple cancer types is still a great challenge, and this investigation will undoubtedly benefit the deciphering of cancers and will be helpful for personalized therapy and precision medicine in cancer treatment. In this study, we propose two optimization models to discover, de novo, common driver gene sets among multiple cancer types (ComMDP) and driver gene sets specific to one or several cancer types relative to other cancers (SpeMDP). We first apply ComMDP and SpeMDP to simulated data to validate their efficiency. Then, we further apply these methods to 12 cancer types from The Cancer Genome Atlas (TCGA) and obtain several biologically meaningful driver pathways. As examples, we construct a common cancer pathway model for BRCA and OV, infer a complex driver pathway model for BRCA carcinogenesis based on the common driver gene sets of BRCA with eight other cancer types, and investigate specific driver pathways of the liquid cancer acute myeloid leukemia (LAML) versus solid cancer types. In these analyses, additional candidate cancer genes are also found. PMID:28168295
A Mixture Modeling Framework for Differential Analysis of High-Throughput Data
Taslim, Cenny; Lin, Shili
2014-01-01
The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have led to massive amounts of data on gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a "moving target" scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge, so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge performance and showed that the approach can be more efficient than any individual model without inflating the type I error. PMID:25057284
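As a generic illustration of the model-based idea (not the authors' implementation), the sketch below fits a two-component Gaussian mixture to simulated test statistics and calls features differential from posterior probabilities; the component count and the posterior cutoff are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
null_scores = rng.normal(0.0, 1.0, 9000)        # most features: no real difference
diff_scores = rng.normal(3.0, 1.0, 1000)        # a minority: truly differential
scores = np.concatenate([null_scores, diff_scores]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(scores)
diff_comp = int(np.argmax(gm.means_.ravel()))   # component with the larger mean
posterior = gm.predict_proba(scores)[:, diff_comp]
called = posterior > 0.9                        # illustrative posterior cutoff
print("features called differential:", int(called.sum()))
```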
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude and frequency of extreme rainfall events and thereby to predict future flood magnitudes. This study analyses the application of a rainfall partial duration series (PDS) in the rapidly growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (Normal, Log-Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III and Log-Pearson Type III) and their parameters were estimated using the method of L-moments. Different model selection criteria were also applied, namely the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution is the best-fitting statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
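A hedged sketch of the distribution-selection step on synthetic data: candidate distributions are fitted and ranked by AIC. Parameters here are estimated by maximum likelihood with SciPy rather than by L-moments, and the candidate list is abbreviated, so only the ranking logic mirrors the study.

```python
import numpy as np
from scipy import stats

# Synthetic partial duration series of daily rainfall exceedances (mm).
pds = stats.genextreme.rvs(c=-0.1, loc=25.0, scale=10.0, size=80, random_state=2)

candidates = {
    "Normal": stats.norm,
    "Log-Normal": stats.lognorm,
    "Gumbel (EV type I)": stats.gumbel_r,
    "GEV": stats.genextreme,
    "Pearson Type III": stats.pearson3,
}

for name, dist in candidates.items():
    params = dist.fit(pds)                        # maximum likelihood fit
    loglik = np.sum(dist.logpdf(pds, *params))
    aic = 2 * len(params) - 2 * loglik            # Akaike Information Criterion
    print(f"{name:18s} AIC = {aic:7.1f}")
```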
Government Jobs and Careers in Foreign Languages.
ERIC Educational Resources Information Center
Leaver, Betty Lou
A discussion of foreign-language-related jobs available in federal government agencies describes the extent and types of opportunities and suggests methods to use in preparing and applying for them. There are many diverse opportunities for foreign language students in the federal government, but many are hard to locate from the outside, rarely…
A Survey of Popular R Packages for Cluster Analysis
ERIC Educational Resources Information Center
Flynt, Abby; Dean, Nema
2016-01-01
Cluster analysis is a set of statistical methods for discovering new group/class structure when exploring data sets. This article reviews the following popular libraries/commands in the R software language for applying different types of cluster analysis: from the stats library, the kmeans, and hclust functions; the mclust library; the poLCA…
Fitting ARMA Time Series by Structural Equation Models.
ERIC Educational Resources Information Center
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
Using Latent Class Analysis to Model Temperament Types
ERIC Educational Resources Information Center
Loken, Eric
2004-01-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks…
Apply Pesticides Correctly, A Guide for Commercial Applicators: Seed Treatment.
ERIC Educational Resources Information Center
Wamsley, Mary Ann, Ed.; Vermeire, Donna M., Ed.
This guide contains basic information to meet specific standards for pesticide applicators. The text is concerned with the types of seeds that require chemical protection against pests. Methods of treatment and labeling requirements for such seeds as rye, wheat, soybeans, peas, and grass hybrids are discussed. Safety and environmental precautions…
Loudiyi, M; Rutledge, D N; Aït-Kaddour, A
2018-10-30
The Common Dimension (ComDim) chemometric method for multi-block data analysis was employed to evaluate the impact of different added salts and ripening times on the physicochemical, color, dynamic low-amplitude oscillatory rheology, texture profile, and molecular structure (fluorescence and MIR spectroscopies) characteristics of five Cantal-type cheeses. Firstly, Independent Components Analysis (ICA) was applied separately to the fluorescence and MIR spectra in order to extract the relevant signal sources and the associated proportions related to molecular structure. ComDim was then applied to the 31 data tables corresponding to the proportions of the ICA signals obtained for the spectral methods and to the global analysis of the cheeses by the other techniques. The ComDim results indicated that cheeses made with 50% NaCl or with 75:25% NaCl/KCl generally exhibit equivalent structural, textural, meltability and color properties. The proposed methodology demonstrates the applicability of ComDim for the characterization of samples when different techniques describe the same samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Isozaki, Masanori; Adachi, Kouichi; Hita, Takanori; Asano, Yuji
Applying anti-corrosion grease and aluminum-clad steel (AC) wires to ACSR has been adopted as a general method to protect overhead transmission line conductors and wires from corrosion. However, there are cases in which the ineffectiveness of these means has been reported for transmission lines passing through the acid atmosphere in the vicinity of factories exhausting acid smoke. As is well known, a feature of corrosion caused by an acid atmosphere is that it progresses at a higher rate. As countermeasures against such acid corrosion, the application of high-purity aluminum, the selective removal of inter-metallic compounds in aluminum, and plastic-coated wires have been reported before, and each has its own advantages and disadvantages. In a previous letter, we reported a new type of anti-corrosion grease that shows excellent properties in acid atmospheres as well as in salty environments. Here we present a new anti-corrosion technology in which a high-corrosion-resistance aluminum alloy or zinc coating is applied to each component wire of a conductor, which we succeeded in developing through a series of studies on anti-corrosion methods for overhead transmission lines.
3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading
Cho, Nam-Hoon; Choi, Heung-Kook
2014-01-01
One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
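A minimal sketch (not the authors' code) of one ingredient, a single-offset 3D grey-level co-occurrence matrix computed with NumPy and two Haralick-style features derived from it; the volume, quantization level, and offset are illustrative assumptions.

```python
import numpy as np

def glcm_3d(volume, offset=(0, 0, 1), levels=16):
    """Co-occurrence counts between voxels separated by a non-negative (dz, dy, dx) offset."""
    vol = np.clip((volume * levels).astype(int), 0, levels - 1)   # quantize [0, 1) volume
    dz, dy, dx = offset
    a = vol[: vol.shape[0] - dz, : vol.shape[1] - dy, : vol.shape[2] - dx]
    b = vol[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)                    # accumulate pair counts
    return glcm / glcm.sum()                                      # normalize to probabilities

rng = np.random.default_rng(3)
volume = rng.random((32, 64, 64))               # stand-in for a reconstructed tissue volume
p = glcm_3d(volume)

i, j = np.indices(p.shape)
contrast = np.sum(p * (i - j) ** 2)             # Haralick contrast
energy = np.sum(p ** 2)                         # angular second moment
print(f"contrast = {contrast:.3f}, energy = {energy:.4f}")
```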
Jia, Li; Liu, Yaling; Du, Yanyan; Xing, Da
2007-06-22
A pressurized capillary electrochromatography (pCEC) system was developed for the separation of water-soluble vitamins, in which UV absorbance was used as the detection method and a monolithic silica-ODS column as the separation column. The parameters affecting the separation resolution (type and content of organic solvent in the mobile phase, type and concentration of electrolyte, pH of the electrolyte buffer, applied voltage and flow rate) were evaluated. The combination of two on-line concentration techniques, namely the solvent gradient zone-sharpening effect and field-enhanced sample stacking, was utilized to improve detection sensitivity, which proved beneficial by enabling the injection of large sample volumes. Coupling electrokinetic injection with the on-line concentration techniques was much more beneficial for the concentration of positively charged vitamins. Compared with the conventional injection mode, the enhancement in the detection sensitivities of the water-soluble vitamins using the on-line concentration techniques is in the range of 3- to 35-fold. The developed pCEC method was applied to evaluate water-soluble vitamins in corn.
Solving nonlinear evolution equation system using two different methods
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Bekir, Ahmet; Ozer, Mehmet N.
2015-12-01
This paper deals with constructing more general exact solutions of the coupled Higgs equation by using the (G′/G, 1/G)-expansion and (1/G′)-expansion methods. The obtained solutions are expressed by three types of functions: hyperbolic, trigonometric and rational functions with free parameters. It has been shown that the suggested methods are productive and can be used to solve nonlinear partial differential equations arising in applied mathematics and engineering. Throughout the paper, all calculations are made with the aid of the Maple software.
Modelling the firing pattern of bullfrog vestibular neurons responding to naturalistic stimuli
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
1999-01-01
We have developed a neural system identification method for fitting models to stimulus-response data, where the response is a spike train. The method involves using a general nonlinear optimisation procedure to fit models in the time domain. We have applied the method to model bullfrog semicircular canal afferent neuron responses during naturalistic, broad-band head rotations. These neurons respond in diverse ways, but a simple four-parameter class of models elegantly accounts for the various types of responses observed. © 1999 Elsevier Science B.V. All rights reserved.
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, the law of the iterated logarithm and the Shuster method, control primarily for inflation of the type I error, while two other methods, trial sequential analysis and sequential meta-analysis, control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
Fuzzy neural network methodology applied to medical diagnosis
NASA Technical Reports Server (NTRS)
Gorzalczany, Marian B.; Deutsch-Mcleish, Mary
1992-01-01
This paper presents a technique for building expert systems that combines the fuzzy-set approach with artificial neural network structures. This technique can effectively deal with two types of medical knowledge: a nonfuzzy one and a fuzzy one which usually contributes to the process of medical diagnosis. Nonfuzzy numerical data is obtained from medical tests. Fuzzy linguistic rules describing the diagnosis process are provided by a human expert. The proposed method has been successfully applied in veterinary medicine as a support system in the diagnosis of canine liver diseases.
2006-04-21
C. M., and Prendergast, J. P., 2002, "Thermal Analysis of Hypersonic Inlet Flow with Exergy-Based Design Methods," International Journal of Applied... A parametric study of the PS and its components is first presented in order to show the type of detailed information on internal system losses which an exergy... "Thermoeconomic Isolation Applied to the Optimal Synthesis/Design of an Advanced Fighter Aircraft System," International Journal of Thermodynamics, ICAT
Yoo, Won-Gyu
2015-01-01
[Purpose] This study examined the effects of different computer typing speeds on the acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and a CONFORMat system. [Results] Fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. Fingertip acceleration was likewise increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.
Number needed to treat (NNT) in clinical literature: an appraisal.
Mendes, Diogo; Alves, Carlos; Batel-Marques, Francisco
2017-06-01
The number needed to treat (NNT) is an absolute effect measure that has been used to assess beneficial and harmful effects of medical interventions. Several methods can be used to calculate NNTs, and they should be applied depending on the different study characteristics, such as the design and type of variable used to measure outcomes. Whether or not the most recommended methods have been applied to calculate NNTs in studies published in the medical literature is yet to be determined. The aim of this study is to assess whether the methods used to calculate NNTs in studies published in medical journals are in line with basic methodological recommendations. The top 25 high-impact factor journals in the "General and/or Internal Medicine" category were screened to identify studies assessing pharmacological interventions and reporting NNTs. Studies were categorized according to their design and the type of variables. NNTs were assessed for completeness (baseline risk, time horizon, and confidence intervals [CIs]). The methods used for calculating NNTs in selected studies were compared to basic methodological recommendations published in the literature. Data were analyzed using descriptive statistics. The search returned 138 citations, of which 51 were selected. Most were meta-analyses (n = 23, 45.1%), followed by clinical trials (n = 17, 33.3%), cohort (n = 9, 17.6%), and case-control studies (n = 2, 3.9%). Binary variables were more common (n = 41, 80.4%) than time-to-event (n = 10, 19.6%) outcomes. Twenty-six studies (51.0%) reported only NNT to benefit (NNTB), 14 (27.5%) reported both NNTB and NNT to harm (NNTH), and 11 (21.6%) reported only NNTH. Baseline risk (n = 37, 72.5%), time horizon (n = 38, 74.5%), and CI (n = 32, 62.7%) for NNTs were not always reported. Basic methodological recommendations to calculate NNTs were not followed in 15 studies (29.4%). The proportion of studies applying non-recommended methods was particularly high for meta-analyses (n = 13, 56.5%). A considerable proportion of studies, particularly meta-analyses, applied methods that are not in line with basic methodological recommendations. Despite their usefulness in assisting clinical decisions, NNTs are uninterpretable if incompletely reported, and they may be misleading if calculating methods are inadequate to study designs and variables under evaluation. Further research is needed to confirm the present findings.
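For readers who want the basic arithmetic, here is a worked sketch with hypothetical numbers: the NNT is the reciprocal of the absolute risk reduction, and one common way to attach a 95% CI is to invert the confidence limits of the risk difference (valid when that interval excludes zero).

```python
import math

def nnt_with_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    cer = events_ctrl / n_ctrl                       # control (baseline) event rate
    eer = events_trt / n_trt                         # experimental event rate
    arr = cer - eer                                  # absolute risk reduction
    se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    lo, hi = arr - z * se, arr + z * se              # CI for the risk difference
    return 1 / arr, (1 / hi, 1 / lo)                 # NNT and its CI (CI must exclude zero)

# Hypothetical trial: 60/200 events on control, 40/200 on treatment.
nnt, (ci_lo, ci_hi) = nnt_with_ci(60, 200, 40, 200)
print(f"NNTB = {nnt:.1f}, 95% CI {ci_lo:.1f} to {ci_hi:.1f}")
```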
Godin, Katelyn; Stapleton, Jackie; Kirkpatrick, Sharon I; Hanning, Rhona M; Leatherdale, Scott T
2015-10-22
Grey literature is an important source of information for large-scale review syntheses. However, there are many characteristics of grey literature that make it difficult to search systematically. Further, there is no 'gold standard' for rigorous systematic grey literature search methods and few resources on how to conduct this type of search. This paper describes systematic review search methods that were developed and applied to complete a case study systematic review of grey literature that examined guidelines for school-based breakfast programs in Canada. A grey literature search plan was developed to incorporate four different search strategies: (1) grey literature databases, (2) customized Google search engines, (3) targeted websites, and (4) consultation with contact experts. These complementary strategies were used to minimize the risk of omitting relevant sources. Since abstracts are often unavailable in grey literature documents, items' abstracts, executive summaries, or tables of contents (whichever was available) were screened. Screening of publications' full text followed. Data were extracted on the organization, year published, developer, intended audience, goals/objectives of the document, sources of evidence/resources cited, meals mentioned in the guidelines, and recommendations for program delivery. The search strategy for identifying and screening publications for inclusion in the case study review was found to be manageable, comprehensive, and intuitive when applied in practice. The four search strategies of the grey literature search plan yielded 302 potentially relevant items for screening. Following the screening process, 15 publications that met all eligibility criteria remained and were included in the case study systematic review. The high-level findings of the case study systematic review are briefly described. This article demonstrates a feasible and seemingly robust method for applying systematic search strategies to identify web-based resources in the grey literature. The search strategy we developed and tested is amenable to adaptation to identify other types of grey literature from other disciplines and to answer a wide range of research questions. This method should be further adapted and tested in future research syntheses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderton, Christopher R.; Chu, Rosalie K.; Tolic, Nikola
The ability to visualize biochemical interactions between microbial communities using MALDI MSI has provided tremendous insights into a variety of biological fields. Matrix application using a sieve proved to be incredibly useful, but it had many limitations, including uneven matrix coverage and restrictions on the types of matrices one could employ. Recently, there has been a concerted effort to improve matrix application for studying agar-plated microbial cultures, much of which has utilized automated matrix sprayers. Here, we describe the usefulness of a robotic sprayer for matrix application. The robotic sprayer has two-dimensional control over where matrix is applied and a heated capillary that allows for rapid drying of the applied matrix. This method provided a significant increase in MALDI sensitivity over the sieve method, as demonstrated by FT-ICR MS analysis, facilitating the ability to obtain higher lateral resolution MS images of Bacillus subtilis than previously reported. This method also allowed different matrices to be applied to the culture surfaces.
Chromophore Poling in Thin Films of Organic Glasses. 2. Two-Electrode Corona Discharge Setup
NASA Astrophysics Data System (ADS)
Vilitis, O.; Muzikante, I.; Rutkis, M.; Vembris, A.
2012-01-01
In Part 1 of this article we provided a description of corona discharge physics and an overview of the methods used for corona poling of thin organic films. The subsequent sections describe comparatively simple technical methods for poling organic nonlinear optical polymers using a two-electrode (point-to-plate or wire-to-plate) technique. The polarization build-up was studied by the DC positive corona method for poling nonlinear optical (NLO) polymers. The experimental setup provides a corona discharge current from 0.5 μA up to 3 μA by applying a 3 kV - 12 kV voltage to the corona electrode and makes possible selection among the types of corona electrodes (needle, multi-needle, wire, etc.). The results of experimental testing of the poling setup show that, at fixed optimal operational parameters of poling (the sample orientation temperature and the discharge current), the corona charging of polymeric materials can successfully be performed using the two-electrode technique. To study the dynamics of both the poling and the charge transport processes, a three-electrode charging system, a corona triode, should be applied.
Development of toroid-type HTS DC reactor series for HVDC system
NASA Astrophysics Data System (ADS)
Kim, Kwangmin; Go, Byeong-Soo; Park, Hea-chul; Kim, Sung-kyu; Kim, Seokho; Lee, Sangjin; Oh, Yunsang; Park, Minwon; Yu, In-Keun
2015-11-01
This paper describes design specifications and performance of a toroid-type high-temperature superconducting (HTS) DC reactor. The first phase operation targets of the HTS DC reactor were 400 mH and 400 A. The authors have developed a real HTS DC reactor system during the last three years. The HTS DC reactor was designed using 2G GdBCO HTS wires. The HTS coils of the toroid-type DC reactor magnet were made in the form of a D-shape. The electromagnetic performance of the toroid-type HTS DC reactor magnet was analyzed using the finite element method program. A conduction cooling method was adopted for reactor magnet cooling. The total system has been successfully developed and tested in connection with LCC type HVDC system. Now, the authors are studying a 400 mH, kA class toroid-type HTS DC reactor for the next phase research. The 1500 A class DC reactor system was designed using layered 13 mm GdBCO 2G HTS wire. The expected operating temperature is under 30 K. These fundamental data obtained through both works will usefully be applied to design a real toroid-type HTS DC reactor for grid application.
Matsuzaki, Rei; Yabushita, Satoshi
2017-05-05
The complex basis function (CBF) method applied to various atomic and molecular photoionization problems can be interpreted as an L2 method for solving the driven-type (inhomogeneous) Schrödinger equation, whose driven term is the dipole operator times the initial-state wave function. However, efficient basis functions for representing the solution have not been fully studied. Moreover, the relation between this solution and that of the ordinary Schrödinger equation has been unclear. For these reasons, most previous applications have been limited to total cross sections. To examine the applicability of the CBF method to differential cross sections and asymmetry parameters, we show that the complex-valued solution to the driven-type Schrödinger equation can be obtained variationally by optimizing complex trial functions for the frequency-dependent polarizability. In test calculations made for the hydrogen photoionization problem with five or six complex Slater-type orbitals (cSTOs), the complex-valued expansion coefficients and the orbital exponents were optimized with the analytic derivative method. Both the real and imaginary parts of the solution were obtained accurately in a wide region covering typical molecular distances. The phase shifts and asymmetry parameters were successfully obtained by extrapolating the CBF solution from the inner matching region to the asymptotic region using the WKB method. The distribution of the optimized orbital exponents in the complex plane is explained based on the close connection between the CBF method and the driven-type equation method. This information is essential for constructing appropriate basis sets in future molecular applications. © 2017 Wiley Periodicals, Inc.
Extremely large magnetoresistance in a high-quality WTe2 grown by flux method
NASA Astrophysics Data System (ADS)
Tsumura, K.; Yano, R.; Kashiwaya, H.; Koyanagi, M.; Masubuchi, S.; Machida, T.; Namiki, H.; Sasagawa, T.; Kashiwaya, S.
2018-03-01
We have grown single crystals of WTe2 by a self-flux method and evaluated the quality of the crystals. A Hall bar-type device was fabricated from an as-exfoliated film on a Si substrate and longitudinal resistance Rxx was measured. Rxx increased with an applied perpendicular magnetic field without saturation and an extremely large magnetoresistance as high as 376,059 % was observed at 8.27 T and 1.7 K.
[Study of beta-turns in globular proteins].
Amirova, S R; Milchevskiĭ, Iu V; Filatov, I V; Esipova, N G; Tumanian, V G
2005-01-01
The formation of beta-turns in globular proteins has been studied by the method of molecular mechanics. The statistical method of discriminant analysis was applied to the calculated energy components and the sequences of oligopeptide segments, after which a prediction of type I beta-turns was made. The accuracy of true-positive prediction is 65%. The components of conformational energy that considerably affect beta-turn formation were delineated: the torsional energy, the energy of hydrogen bonds, and the van der Waals energy.
Estimation of geotechnical parameters on the basis of geophysical methods and geostatistics
NASA Astrophysics Data System (ADS)
Brom, Aleksander; Natonik, Adrianna
2017-12-01
The paper presents a possible implementation of ordinary cokriging combined with geophysical investigation on humidity data acquired in geotechnical studies. The authors describe the concept of geostatistics, the terminology of geostatistical modelling, spatial correlation functions, principles of solving cokriging systems, the advantages of (co-)kriging in comparison with other interpolation methods, and obstacles in this type of attempt. Cross-validation and a discussion of the results were performed, with an indication of the prospects of applying similar procedures in various types of research.
Williams, J.H.; Paillet, Frederick L.
2002-01-01
Cross-borehole flowmeter pulse tests define subsurface connections between discrete fractures using short stress periods to monitor the propagation of the pulse through the flow system. This technique is an improvement over other cross-borehole techniques because measurements can be made in open boreholes without packers or previous identification of water-producing intervals. The method is based on the concept of monitoring the propagation of pulses rather than steady flow through the fracture network. In this method, a hydraulic stress is applied to a borehole connected to a single, permeable fracture, and the distribution of flow induced by that stress monitored in adjacent boreholes. The transient flow responses are compared to type curves computed for several different types of fracture connections. The shape of the transient flow response indicates the type of fracture connection, and the fit of the data to the type curve yields an estimate of its transmissivity and storage coefficient. The flowmeter pulse test technique was applied in fractured shale at a volatile-organic contaminant plume in Watervliet, New York. Flowmeter and other geophysical logs were used to identify permeable fractures in eight boreholes in and near the contaminant plume using single-borehole flow measurements. Flowmeter cross-hole pulse tests were used to identify connections between fractures detected in the boreholes. The results indicated a permeable fracture network connecting many of the individual boreholes, and demonstrated the presence of an ambient upward hydraulic-head gradient throughout the site.
Constrained maximum likelihood modal parameter identification applied to structural dynamics
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
Multivariate classification of the infrared spectra of cell and tissue samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haaland, D.M.; Jones, H.D.; Thomas, E.V.
1997-03-01
Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. © 1997 Society for Applied Spectroscopy
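A hedged sketch of the two classification strategies mentioned (PLS regression on dummy-coded class labels, and linear discriminant analysis on principal component scores), applied here to simulated spectra rather than the IR data; the component counts and class structure are assumptions for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n_per_class, n_channels = 30, 400
class_means = rng.normal(0, 1, (3, n_channels))
X = np.vstack([m + rng.normal(0, 0.5, (n_per_class, n_channels)) for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)        # normal / hyperplastic / neoplastic stand-ins

# PLS regression on one-hot class targets; predicted class = largest fitted response.
Y = np.eye(3)[y]
pls_fit = cross_val_predict(PLSRegression(n_components=5), X, Y, cv=5)
pls_pred = pls_fit.argmax(axis=1)

# Linear discriminant analysis applied to principal component scores.
pca_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
lda_pred = cross_val_predict(pca_lda, X, y, cv=5)

print("PLS on class targets, CV accuracy:", round(float(np.mean(pls_pred == y)), 3))
print("PCA + LDA, CV accuracy:           ", round(float(np.mean(lda_pred == y)), 3))
```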
Ichida, Kensuke; Kise, Kazuyoshi; Morita, Tetsuro; Yazawa, Ryosuke; Takeuchi, Yutaka; Yoshizaki, Goro
2017-10-01
We previously established surrogate broodstock in which the donor germ cells transplanted into the peritoneal cavities of xenogeneic recipients were capable of developing into functional eggs and sperm in teleost fish. In this transplantation system, only the undifferentiated germ cells such as type A spermatogonia (ASG) or a portion of the ASG population were capable of being incorporated into the genital ridges of the recipients and undergo gametogenesis. Therefore, the use of enriched ASGs can be expected to achieve efficient donor-cell incorporation. Here, we established a method of isolation and enrichment of the ASG of Pacific bluefin tuna using flow cytometry. Whole testicular cell suspensions were fractionated by forward and side scatter properties, following which ASGs were enriched in a fraction in which the forward scatter signal was relatively high and side scatter signal was relatively low. The diameter of sorted cells using the fraction was identical to the size of ASGs observed in histological analysis, and these cells also expressed the vasa gene. In addition, we succeeded in applying this method to several maturation stages of Pacific bluefin tuna. Since this method was based on light-scattering characteristics of ASGs, it can potentially be applied to various teleosts. We expect that this method can contribute to the production of seeds of Pacific bluefin tuna using surrogate broodstock. Copyright © 2017 Elsevier Inc. All rights reserved.
Combating QR-Code-Based Compromised Accounts in Mobile Social Networks.
Guo, Dong; Cao, Jian; Wang, Xiaoqi; Fu, Qiang; Li, Qiang
2016-09-20
Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant in which malicious URLs are spread covertly through quick response (QR) codes in order to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one type analyzes the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other type applies methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to the restrictiveness of platforms and the simplification of methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of information-sending modes of compromised accounts and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy rate of the model is shown to be as high as 87.6%, and the false positive rate is only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect the compromised accounts in MSNs.
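To make the entropy features concrete, here is a minimal sketch on synthetic traces (the feature definitions, data, and thresholds are illustrative, not the paper's): the Shannon entropy of posting intervals and the conditional entropy of location given hour of day.

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def conditional_entropy(x, given):
    """H(X | Y) = H(X, Y) - H(Y)."""
    return entropy(list(zip(x, given))) - entropy(given)

rng = np.random.default_rng(5)
# A compromised account posting on a rigid schedule vs. an irregular human account.
bot_intervals = ["5min"] * 90 + ["10min"] * 10
human_intervals = rng.choice(["1min", "5min", "30min", "2h", "1d"], 100).tolist()
print("bot   H(interval):", round(entropy(bot_intervals), 3))
print("human H(interval):", round(entropy(human_intervals), 3))

hours = rng.integers(0, 24, 100).tolist()
cells = rng.choice(["home", "office", "mall"], 100).tolist()
print("H(location | hour):", round(conditional_entropy(cells, hours), 3))
```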
Combating QR-Code-Based Compromised Accounts in Mobile Social Networks
Guo, Dong; Cao, Jian; Wang, Xiaoqi; Fu, Qiang; Li, Qiang
2016-01-01
Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant in which malicious URLs are spread covertly through quick response (QR) codes in order to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one type analyzes the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other type applies methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to the restrictiveness of platforms and the simplification of methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of information-sending modes of compromised accounts and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy rate of the model is shown to be as high as 87.6%, and the false positive rate is only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect the compromised accounts in MSNs. PMID:27657071
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.
2016-01-01
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample. PMID:27822525
Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T
2017-07-01
Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to the diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best-performing prediction method. We conclude that evidence was found that available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combinations of these types of methods in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
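A hedged sketch of one of the evaluated detection approaches: a Serfling-style regression (linear trend plus annual harmonics) fitted to simulated weekly counts, with an alert raised when observations exceed an upper prediction bound. For simplicity the baseline here is fitted to all weeks, whereas a classical Serfling model excludes epidemic periods from the fit; the data and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
weeks = np.arange(156)                                   # three seasons of weekly data
baseline = 50 + 0.05 * weeks + 15 * np.sin(2 * np.pi * weeks / 52.0)
counts = rng.poisson(np.clip(baseline, 1, None)).astype(float)
counts[60:68] += rng.poisson(40, 8)                      # injected epidemic period

# Design matrix: intercept, linear trend, annual sine/cosine terms.
X = np.column_stack([
    np.ones_like(weeks, dtype=float),
    weeks.astype(float),
    np.sin(2 * np.pi * weeks / 52.0),
    np.cos(2 * np.pi * weeks / 52.0),
])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
expected = X @ coef
threshold = expected + 1.96 * np.std(counts - expected)  # simple upper bound

print("weeks flagged as increased activity:", np.where(counts > threshold)[0])
```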
Liu, Wei; Du, Peijun; Wang, Dongchen
2015-01-01
One important method for obtaining continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend values for soil potassium contents in the Qinghai Lake region of China based on measured values. Then, based on soil type, geology type, land use type, and slope data, the remaining residual was modelled with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods, including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the other models tested in this paper. Notably, the EL-SP maps describe more local detail and more accurate spatial patterns of soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundaries in soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
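An illustrative sketch of the trend-plus-residual idea only (synthetic points; the actual EL-SP model, ensemble members, and covariates differ): fit a large-scale trend, model the residual from categorical ancillary variables with an ensemble learner, and sum the two parts for prediction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 300
coords = rng.uniform(0, 100, (n, 2))                         # sample locations (x, y)
soil_type = rng.integers(0, 4, n)                            # categorical ancillary data
land_use = rng.integers(0, 3, n)
potassium = 20 + 0.1 * coords[:, 0] + 3.0 * soil_type + rng.normal(0, 1.5, n)

trend_model = LinearRegression().fit(coords, potassium)      # large-scale spatial trend
residual = potassium - trend_model.predict(coords)

ancillary = np.column_stack([soil_type, land_use]).astype(float)
idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(ancillary[idx_tr], residual[idx_tr])                  # ensemble model of the residual

pred = trend_model.predict(coords[idx_te]) + rf.predict(ancillary[idx_te])
rmse = np.sqrt(np.mean((pred - potassium[idx_te]) ** 2))
print(f"hold-out RMSE = {rmse:.2f}")
```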
The use of "mixing" procedure of mixed methods in health services research.
Zhang, Wanqing; Creswell, John
2013-08-01
Mixed methods research has emerged alongside qualitative and quantitative approaches as an important tool for health services researchers. Despite growing interest among health services researchers in using mixed methods designs, little has been done to identify the procedural aspects of doing so. The aim of this work is to describe how mixed methods researchers mix the qualitative and quantitative aspects of their studies in health services research. We searched PubMed for articles using mixed methods in health services research published between January 1, 2006 and December 30, 2010. We identified and reviewed 30 published health services research articles on studies in which mixed methods had been used, and selected 3 articles as illustrations to help health services researchers conceptualize the type of mixing procedures they were using. Three main "mixing" procedures were applied within these studies: (1) the researchers analyzed the 2 types of data at the same time but separately and integrated the results during interpretation; (2) the researchers connected the qualitative and quantitative portions in phases in such a way that 1 approach was built upon the findings of the other approach; and (3) the researchers mixed the 2 data types by embedding the analysis of 1 data type within the other. "Mixing" in mixed methods is more than just the combination of 2 independent components of the quantitative and qualitative data. The use of "mixing" procedures in health services research involves the integration, connection, and embedding of these 2 data components.
Juang, Chia-Feng; Hsu, Chia-Hung
2009-12-01
This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q-values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
Ultrasonic and densimetric titration applied for acid-base reactions.
Burakowski, Andrzej; Gliński, Jacek
2014-01-01
Classical acoustic acid-base titration was monitored using sound speed and density measurements. Plots of these parameters, as well as of the adiabatic compressibility coefficient calculated from them, exhibit changes with the volume of added titrant. The compressibility changes can be explained and quantitatively predicted in terms of the Pasynski theory of non-compressible hydrates combined with the additivity of the hydration numbers over the amounts and types of ions and molecules present in solution. It also seems that this development could be applied in chemical engineering for monitoring the course of chemical processes, since the experimental methods employed can be carried out almost independently of the medium under test (harmful, aggressive, etc.).
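For reference, the adiabatic compressibility coefficient follows from the two measured quantities via the Newton-Laplace relation, and, in one commonly quoted form of Pasynski's relation (the notation here is an assumption, not taken from the paper), the hydration number is obtained from the ratio of solution to pure-solvent compressibility:

$$\kappa_S = \frac{1}{\rho\, u^2}, \qquad n_h = \frac{n_1}{n_2}\left(1 - \frac{\kappa_S}{\kappa_{S,0}}\right),$$

where $u$ is the sound speed, $\rho$ the density, $\kappa_{S,0}$ the compressibility of the pure solvent, and $n_1$, $n_2$ the numbers of moles of solvent and solute.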
NASA Technical Reports Server (NTRS)
Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
In this paper we review the existing and develop new local discontinuous Galerkin methods for solving time dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection diffusion equations involving second derivatives and for KdV type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for the time dependent bi-harmonic type equations involving fourth derivatives, and partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L(exp 2) stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, on the local discontinuous Galerkin methods applied to equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher derivative cases, in effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a local uniform mesh.
Diffusion length measurements of thin GaAs solar cells by means of energetic electrons
NASA Technical Reports Server (NTRS)
Vonross, O.
1980-01-01
A calculation of the short circuit current density (j sub sc) of a thin GaAs solar cell induced by fast electrons is presented. It is shown that, in spite of the disparity in thickness between the N-type and P-type portions of the junction, the measurement of the bulk diffusion length L sub p of the N-type part is seriously hampered by the presence of a sizable contribution to the j sub sc from the P-type region. Corrections of up to 50% had to be made in order to interpret the data correctly. Since these corrections were not amenable to direct measurements, it is concluded that the electron beam method for the determination of the bulk minority carrier diffusion length, which works so well for Si solar cells, is a poor method when applied to thin GaAs cells.
NASA Astrophysics Data System (ADS)
Fang, Yuanyuan; Zuo, Yanyan; Xia, Zhaowang
2018-03-01
The noise level is getting higher with the development of high-power marine power plants. Mechanical noise is one of the most obvious noise sources; it not only affects equipment reliability, riding comfort and the working environment, but also increases underwater noise. The periodic truss type device, which is commonly applied in the aerospace and architectural fields, is introduced to floating raft construction in ships. Four different raft frame structures are designed in the paper. The vibration transmissibility is taken as an evaluation index to measure the vibration isolation effect. A design scheme with the best vibration isolation effect is found by a numerical method. Plate type and optimized periodic truss type raft frame structures were fabricated to experimentally verify the vibration isolation effect of the periodic raft structure. The experimental results demonstrate that, for the same mass, the periodic truss floating raft has a better isolation effect than the plate type floating raft.
System-wide identification of wild-type SUMO-2 conjugation sites
Hendriks, Ivo A.; D'Souza, Rochelle C.; Chang, Jer-Gung; Mann, Matthias; Vertegaal, Alfred C. O.
2015-01-01
SUMOylation is a reversible post-translational modification (PTM) regulating all nuclear processes. Identification of SUMOylation sites by mass spectrometry (MS) has been hampered by bulky tryptic fragments, which thus far necessitated the use of mutated SUMO. Here we present a SUMO-specific protease-based methodology which circumvents this problem, dubbed Protease-Reliant Identification of SUMO Modification (PRISM). PRISM allows for detection of SUMOylated proteins as well as identification of specific sites of SUMOylation while using wild-type SUMO. The method is generic and could be widely applied to study lysine PTMs. We employ PRISM in combination with high-resolution MS to identify SUMOylation sites from HeLa cells under standard growth conditions and in response to heat shock. We identified 751 wild-type SUMOylation sites on endogenous proteins, including 200 dynamic SUMO sites in response to heat shock. Thus, we have developed a method capable of quantitatively studying wild-type mammalian SUMO at the site-specific and system-wide level. PMID:26073453
Evaluation of null-point detection methods on simulation data
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of the ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
Fault classification method for the driving safety of electrified vehicles
NASA Astrophysics Data System (ADS)
Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika
2014-05-01
A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.
Evaluating the risks of clinical research: direct comparative analysis.
Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David
2014-09-01
Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis, and use of an illustrative example. Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.
NASA Astrophysics Data System (ADS)
Svoren, J.; Neslusan, L.; Porubcan, V.
1993-07-01
It is evident that there is no uniform method of calculating meteor radiants which would yield reliable results for all types of cometary orbits. In the present paper an analysis of this problem is presented, together with recommended methods for various types of orbits. Some additional methods resulting from mathematical modelling are presented and discussed together with Porter's, Steel-Baggaley's and Hasegawa's methods. In order to be able to compare how suitable the application of the individual radiant determination methods is, it is necessary to determine the accuracy with which they approximate real meteor orbits. To verify the accuracy with which the orbit of a meteoroid with at least one node at 1 AU fits the original orbit of the parent body, we applied the Southworth-Hawkins D-criterion (Southworth, R.B., Hawkins, G.S.: 1963, Smithson. Contr. Astrophys 7, 261). D<=0.1 indicates a very good fit of orbits, 0.1
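As a worked illustration of the orbit-similarity measure cited above, here is a small sketch of the Southworth-Hawkins D-criterion. The formula follows the commonly quoted form of Southworth & Hawkins (1963); the function name and argument layout are my own, and the code should be checked against the original paper before any quantitative use.

```python
import numpy as np

def d_sh(q1, e1, i1, node1, peri1, q2, e2, i2, node2, peri2):
    """Southworth-Hawkins D-criterion between two orbits.

    q: perihelion distance [AU]; e: eccentricity; i, node, peri: inclination,
    longitude of ascending node and argument of perihelion [radians].
    Sketch only; verify against Southworth & Hawkins (1963) before use.
    """
    # angle I21 between the two orbital planes
    sin_half_I_sq = (np.sin((i2 - i1) / 2) ** 2
                     + np.sin(i1) * np.sin(i2) * np.sin((node2 - node1) / 2) ** 2)
    two_sin_half_I = 2 * np.sqrt(sin_half_I_sq)
    # difference of the longitudes of perihelion measured from the common node
    pi21 = (peri2 - peri1) + 2 * np.arcsin(
        np.cos((i2 + i1) / 2) * np.sin((node2 - node1) / 2)
        / np.sqrt(1 - sin_half_I_sq))
    d2 = ((e2 - e1) ** 2 + (q2 - q1) ** 2 + two_sin_half_I ** 2
          + ((e1 + e2) / 2) ** 2 * (2 * np.sin(pi21 / 2)) ** 2)
    return np.sqrt(d2)
```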
Ananiadou, Sophia
2016-01-01
Biomedical literature articles and narrative content from Electronic Health Records (EHRs) both constitute rich sources of disease-phenotype information. Phenotype concepts may be mentioned in text in multiple ways, using phrases with a variety of structures. This variability stems partly from the different backgrounds of the authors, but also from the different writing styles typically used in each text type. Since EHR narrative reports and literature articles contain different but complementary types of valuable information, combining details from each text type can help to uncover new disease-phenotype associations. However, the alternative ways in which the same concept may be mentioned in each source constitutes a barrier to the automatic integration of information. Accordingly, identification of the unique concepts represented by phrases in text can help to bridge the gap between text types. We describe our development of a novel method, PhenoNorm, which integrates a number of different similarity measures to allow automatic linking of phenotype concept mentions to known concepts in the UMLS Metathesaurus, a biomedical terminological resource. PhenoNorm was developed using the PhenoCHF corpus—a collection of literature articles and narratives in EHRs, annotated for phenotypic information relating to congestive heart failure (CHF). We evaluate the performance of PhenoNorm in linking CHF-related phenotype mentions to Metathesaurus concepts, using a newly enriched version of PhenoCHF, in which each phenotype mention has an expert-verified link to a concept in the UMLS Metathesaurus. We show that PhenoNorm outperforms a number of alternative methods applied to the same task. Furthermore, we demonstrate PhenoNorm’s wider utility, by evaluating its ability to link mentions of various other types of medically-related information, occurring in texts covering wider subject areas, to concepts in different terminological resources. We show that PhenoNorm can maintain performance levels, and that its accuracy compares favourably to other methods applied to these tasks. PMID:27643689
NASA Astrophysics Data System (ADS)
Najafi, Ali; Karimpour, Mohammad Hassan; Ghaderi, Majid
2014-12-01
Using the fuzzy analytical hierarchy process (AHP) technique, we propose a method for mineral prospectivity mapping (MPM), which is commonly used for exploration of mineral deposits. The fuzzy AHP is a popular technique which has been applied to multi-criteria decision-making (MCDM) problems. In this paper we used fuzzy AHP and a geospatial information system (GIS) to generate a prospectivity model for Iron Oxide Copper-Gold (IOCG) mineralization on the basis of its conceptual model and geo-evidence layers derived from geological, geochemical, and geophysical data in the Taherabad area, eastern Iran. The fuzzy AHP was used to determine the weights belonging to each criterion. The knowledge of three geoscientists on exploration of IOCG-type mineralization was applied to assign weights to evidence layers in the fuzzy AHP MPM approach. After assigning normalized weights to all evidential layers, a fuzzy operator was applied to integrate the weighted evidence layers. Finally, to evaluate the ability of the applied approach to delineate reliable target areas, the locations of known mineral deposits in the study area were used. The results demonstrate acceptable outcomes for IOCG exploration.
[Cost of therapy for neurodegenerative diseases. Applying an activity-based costing system].
Sánchez-Rebull, María-Victoria; Terceño Gómez, Antonio; Travé Bautista, Angeles
2013-01-01
To apply the activity based costing (ABC) model to calculate the cost of therapy for neurodegenerative disorders in order to improve hospital management and allocate resources more efficiently. We used the case study method in the Francolí long-term care day center. We applied all phases of an ABC system to quantify the cost of the activities developed in the center. We identified 60 activities; the information was collected in June 2009. The ABC system allowed us to calculate the average cost per patient with respect to the therapies received. The most costly and commonly applied technique was psycho-stimulation therapy. Focusing on this therapy and on others related to the admissions process could lead to significant cost savings. ABC costing is a viable method for costing activities and therapies in long-term day care centers because it can be adapted to their structure and standard practice. This type of costing allows the costs of each activity and therapy, or combination of therapies, to be determined and aids measures to improve management. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.
Woźniak, Mateusz Kacper; Wiergowski, Marek; Aszyk, Justyna; Kubica, Paweł; Namieśnik, Jacek; Biziuk, Marek
2018-01-30
Amphetamine, methamphetamine, phentermine, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA), and 3,4-methylenedioxy-N-ethylamphetamine (MDEA) are the most popular amphetamine-type stimulants. The use of these substances is a serious societal problem worldwide. In this study, a method based on gas chromatography-tandem mass spectrometry (GC-MS/MS) with simple and rapid liquid-liquid extraction (LLE) and derivatization was developed and validated for the simultaneous determination of the six aforementioned amphetamine derivatives in blood and urine. The detection of all compounds was based on multiple reaction monitoring (MRM) transitions. The most important advantage of the method is the minimal sample volume (as low as 200 μL) required for the extraction procedure. The validation parameters, i.e., the recovery (90.5-104%), inter-day accuracy (94.2-109.1%) and precision (0.5-5.8%), showed the repeatability and sensitivity of the method for both matrices and indicated that the proposed procedure fulfils internationally established acceptance criteria for bioanalytical methods. The procedure was successfully applied to the analysis of real blood and urine samples examined in 22 forensic toxicological cases. To the best of our knowledge, this is the first work presenting the use of GC-MS/MS for the determination of amphetamine-type stimulants in blood and urine. In view of the low limits of detection (0.09-0.81 ng/mL), limits of quantification (0.26-2.4 ng/mL), and high selectivity, the procedure can be applied for drug monitoring in both fatal and non-fatal intoxication cases in routine toxicology analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA was not absorbing in the visible region of interest and thus a closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. In that sense, three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when applying appropriate constraints and adjustable initial concentrations of reagents.
NASA Astrophysics Data System (ADS)
Zhou, Shuguang; Zhou, Kefa; Wang, Jinlin; Yang, Genfang; Wang, Shanshan
2017-12-01
Cluster analysis is a well-known technique that is used to analyze various types of data. In this study, cluster analysis is applied to geochemical data that describe 1444 stream sediment samples collected in northwestern Xinjiang with a sample spacing of approximately 2 km. Three algorithms (the hierarchical, k-means, and fuzzy c-means algorithms) and six data transformation methods (the z-score standardization, ZST; the logarithmic transformation, LT; the additive log-ratio transformation, ALT; the centered log-ratio transformation, CLT; the isometric log-ratio transformation, ILT; and no transformation, NT) are compared in terms of their effects on the cluster analysis of the geochemical compositional data. The study shows that, on the one hand, the ZST does not affect the results of column- or variable-based (R-type) cluster analysis, whereas the other methods, including the LT, the ALT, and the CLT, have substantial effects on the results. On the other hand, the results of the row- or observation-based (Q-type) cluster analysis obtained from the geochemical data after applying NT and the ZST are relatively poor. However, we derive some improved results from the geochemical data after applying the CLT, the ILT, the LT, and the ALT. Moreover, the k-means and fuzzy c-means clustering algorithms are more reliable than the hierarchical algorithm when they are used to cluster the geochemical data. We apply cluster analysis to the geochemical data to explore for Au deposits within the study area, and we obtain a good correlation between the results retrieved by combining the CLT or the ILT with the k-means or fuzzy c-means algorithms and the potential zones of Au mineralization. Therefore, we suggest that the combination of the CLT or the ILT with the k-means or fuzzy c-means algorithms is an effective tool to identify potential zones of mineralization from geochemical data.
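As an illustration of the kind of pipeline the study compares, the sketch below applies the centered log-ratio (CLR) transformation followed by k-means clustering to a hypothetical compositional table. The data, element count, and cluster number are invented, and scikit-learn is assumed to be available; this is not the study's actual workflow.

```python
import numpy as np
from sklearn.cluster import KMeans

def clr(x):
    """Centered log-ratio transform of compositional data.

    x: (n_samples, n_elements) array of strictly positive concentrations.
    Each row is referenced to its geometric mean before taking logs.
    """
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Hypothetical geochemical table: rows = stream-sediment samples, columns = elements
rng = np.random.default_rng(0)
concentrations = rng.lognormal(mean=1.0, sigma=0.5, size=(200, 8))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(clr(concentrations))
```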
Method for lateral force calibration in atomic force microscope using MEMS microforce sensor.
Dziekoński, Cezary; Dera, Wojciech; Jarząbek, Dariusz M
2017-11-01
In this paper we present a simple and direct method for determining the lateral force calibration constant. Our procedure does not require any knowledge about the material or geometrical parameters of the investigated cantilever. We apply a commercially available microforce sensor with advanced electronics for direct measurement of the friction force applied by the cantilever's tip to a flat surface of the microforce sensor measuring beam. Due to Newton's third law, a friction force of equal value tilts the AFM cantilever. Therefore, the torsional (lateral force) signal is compared with the signal from the microforce sensor and the lateral force calibration constant is determined. The method is easy to perform and could be widely used for lateral force calibration constant determination in many types of atomic force microscopes. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
Applying a weed risk assessment approach to GM crops.
Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe
2014-12-01
Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
Progeny Clustering: A Method to Identify Biological Phenotypes
Hu, Chenyue W.; Kornblau, Steven M.; Slater, John H.; Qutub, Amina A.
2015-01-01
Estimating the optimal number of clusters is a major challenge in applying cluster analysis to any type of dataset, especially to biomedical datasets, which are high-dimensional and complex. Here, we introduce an improved method, Progeny Clustering, which is stability-based and exceptionally efficient in computing, to find the ideal number of clusters. The algorithm employs a novel Progeny Sampling method to reconstruct cluster identity, a co-occurrence probability matrix to assess the clustering stability, and a set of reference datasets to overcome inherent biases in the algorithm and data space. Our method was shown successful and robust when applied to two synthetic datasets (datasets of two-dimensions and ten-dimensions containing eight dimensions of pure noise), two standard biological datasets (the Iris dataset and Rat CNS dataset) and two biological datasets (a cell phenotype dataset and an acute myeloid leukemia (AML) reverse phase protein array (RPPA) dataset). Progeny Clustering outperformed some popular clustering evaluation methods in the ten-dimensional synthetic dataset as well as in the cell phenotype dataset, and it was the only method that successfully discovered clinically meaningful patient groupings in the AML RPPA dataset. PMID:26267476
Identifying disease polymorphisms from case-control genetic association data.
Park, L
2010-12-01
In case-control association studies, it is typical to observe several associated polymorphisms in a gene region. Often the most significantly associated polymorphism is considered to be the disease polymorphism; however, it is not clear whether it is the disease polymorphism or there is more than one disease polymorphism in the gene region. Currently, there is no method that can handle these problems based on the linkage disequilibrium (LD) relationship between polymorphisms. To distinguish real disease polymorphisms from markers in LD, a method that can detect disease polymorphisms in a gene region has been developed. Relying on the LD between polymorphisms in controls, the proposed method utilizes model-based likelihood ratio tests to find disease polymorphisms. This method shows reliable Type I and Type II error rates when sample sizes are large enough, and works better with re-sequenced data. Applying this method to fine mapping using re-sequencing or dense genotyping data would provide important information regarding the genetic architecture of complex traits.
An overview of age estimation in forensic anthropology: perspectives and practical considerations.
Márquez-Grant, Nicholas
2015-01-01
Information on methods of age estimation in physical anthropology, in particular with regard to age-at-death from human skeletal remains, is widely available in the literature. However, the practicalities and real challenges faced in forensic casework are not always highlighted. To provide a practitioner's perspective regarding age estimation in forensic anthropology (both in the living as well as the dead), with an emphasis on the types of cases, the value of such work and its challenges and limitations. The paper reviews the current literature on age estimation with a focus on forensic anthropology, but it also brings the author's personal perspective derived from a number of forensic cases. Although much is known about which methods to use, though not always how to apply them, little attention has been given in the literature to the real practicalities faced by forensic anthropologists, for example: the challenges in different types of scenarios; how to report age estimations; responsibilities; and ethical concerns. This paper gathers some of these aspects into one overview which includes the value of such work and the practical challenges, not necessarily with the methods themselves, but also with regard to how these are applied in the different cases where age estimation is required.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
Xu, Xu; Su, Rui; Zhao, Xin; Liu, Zhuang; Zhang, Yupu; Li, Dan; Li, Xueyuan; Zhang, Hanqi; Wang, Ziming
2011-11-30
The ionic liquid-based microwave-assisted dispersive liquid-liquid microextraction (IL-based MADLLME) and derivatization was applied for the pretreatment of six sulfonamides (SAs) prior to the determination by high-performance liquid chromatography (HPLC). By adding methanol (disperser), fluorescamine solution (derivatization reagent) and ionic liquid (extraction solvent) into the sample, extraction, derivatization, and preconcentration were continuously performed. Several experimental parameters, such as the type and volume of extraction solvent, the type and volume of disperser, amount of derivatization reagent, microwave power, microwave irradiation time, pH of sample solution, and ionic strength were investigated and optimized. When the microwave power was 240 W, the analytes could be derivatized and extracted simultaneously within 90 s. The proposed method was applied to the analysis of river water, honey, milk, and pig plasma samples, and the recoveries of analytes obtained were in the range of 95.0-110.8%, 95.4-106.3%, 95.0-108.3%, and 95.7-107.7%, respectively. The relative standard deviations varied between 1.5% and 7.3% (n=5). The results showed that the proposed method was a rapid, convenient and feasible method for the determination of SAs in liquid samples. Copyright © 2011 Elsevier B.V. All rights reserved.
A method for soil moisture probes calibration and validation of satellite estimates.
Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel
2017-01-01
Optimization of field techniques is crucial to ensure high quality soil moisture data. The aim of the work is to present a sampling method for undisturbed soil and soil water content to calibrate soil moisture probes, in the context of the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product validation in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes based on soil type under a freely drying process at ambient temperature. A detailed explanation of field and laboratory procedures to obtain reference soil moisture is shown. The calibration results reflected accurate operation for the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that the accuracy improves significantly when applying the adjustments of the calibration based on soil types (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³). • A sampling method that provides high quality soil water content data for calibration of probes is described. • Importance of calibration based on soil types. • A calibration process for similar soil types could be suitable in practical terms, depending on the required accuracy level.
Preparation of TNAs/NiO p-n heterojunction and their applications in UV photosensor
NASA Astrophysics Data System (ADS)
Yusoff, M. M.; Mamat, M. H.; Malek, M. F.; Abdullah, M. A. R.; Ismail, A. S.; Saidi, S. A.; Mohamed, R.; Suriani, A. B.; Khusaimi, Z.; Rusop, M.
2018-05-01
A nanocomposite consisting of n-type titanium dioxide (TiO2) nanorod arrays (TNAs) and p-type nickel oxide (NiO) was deposited on a transparent conductive oxide (TCO) glass substrate, using a novel facile low-temperature aqueous chemical route (ACR) in a Schott bottle with cap clamps and a sol-gel spin coating method, respectively, for application as an ultraviolet (UV) photosensor. The p-n heterojunction photosensor exhibited an increase in photocurrent under UV light (365 nm, 750 µW/cm2) at applied reverse bias. The measured UV response also revealed an increase in photocurrent and dark current with increasing applied reverse bias on the p-n heterojunction. In this study, the fabricated TNAs/NiO composite nanostructures showed potential for photosensor applications based on the steady photocurrent results obtained under UV irradiation.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors that were close to the nominal level, and reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
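For readers who want to run this kind of comparison on their own count data, a minimal sketch using statsmodels is shown below. The covariates, dispersion, and sample size are invented, and this is not the simulation set-up of the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
treatment = rng.integers(0, 2, n)                 # 0 = control, 1 = treated
baseline = rng.normal(0.0, 1.0, n)                # hypothetical covariate
X = sm.add_constant(np.column_stack([treatment, baseline]))
# Over-dispersed hypoglycemia-like counts (many zeros, long tail)
events = rng.negative_binomial(1.5, 0.3, size=n)

poisson_fit = sm.GLM(events, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(events, X, family=sm.families.NegativeBinomial()).fit()
print(poisson_fit.summary())
print(negbin_fit.summary())
```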
Solving Word Problems using Schemas: A Review of the Literature
Powell, Sarah R.
2011-01-01
Solving word problems is a difficult task for students at-risk for or with learning disabilities (LD). One instructional approach that has emerged as a valid method for helping students at-risk for or with LD to become more proficient at word-problem solving is using schemas. A schema is a framework for solving a problem. With a schema, students are taught to recognize problems as falling within word-problem types and to apply a problem solution method that matches that problem type. This review highlights two schema approaches for 2nd- and 3rd-grade students at-risk for or with LD: schema-based instruction and schema-broadening instruction. A total of 12 schema studies were reviewed and synthesized. Both types of schema approaches enhanced the word-problem skill of students at-risk for or with LD. Based on the review, suggestions are provided for incorporating word-problem instruction using schemas. PMID:21643477
Schalk, Kathrin; Koehler, Peter; Scherf, Katharina Anne
2018-01-01
Celiac disease (CD) is an inflammatory disorder of the upper small intestine caused by the ingestion of storage proteins (prolamins and glutelins) from wheat, barley, rye, and, in rare cases, oats. CD patients need to follow a gluten-free diet by consuming gluten-free products with gluten contents of less than 20 mg/kg. Currently, the recommended method for the quantitative determination of gluten is an enzyme-linked immunosorbent assay (ELISA) based on the R5 monoclonal antibody. Because the R5 ELISA mostly detects the prolamin fraction of gluten, a new independent method is required to detect prolamins as well as glutelins. This paper presents the development of a method to quantitate 16 wheat marker peptides derived from all wheat gluten protein types by liquid chromatography tandem mass spectrometry (LC-MS/MS) in the multiple reaction monitoring mode. The quantitation of each marker peptide in the chymotryptic digest of a defined amount of the respective reference wheat protein type resulted in peptide-specific yields. This enabled the conversion of peptide into protein type concentrations. Gluten contents were expressed as sum of all determined protein type concentrations. This new method was applied to quantitate gluten in wheat starches and compared to R5 ELISA and gel-permeation high-performance liquid chromatography with fluorescence detection (GP-HPLC-FLD), which resulted in a strong correlation between LC-MS/MS and the other two methods.
Semi-supervised learning for photometric supernova classification
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Homrighausen, Darren; Freeman, Peter E.; Schafer, Chad M.; Poznanski, Dovi
2012-01-01
We present a semi-supervised method for photometric supernova typing. Our approach is to first use the non-linear dimension reduction technique diffusion map to detect structure in a data base of supernova light curves and subsequently employ random forest classification on a spectroscopically confirmed training set to learn a model that can predict the type of each newly observed supernova. We demonstrate that this is an effective method for supernova typing. As supernova numbers increase, our semi-supervised method efficiently utilizes this information to improve classification, a property not enjoyed by template-based methods. Applied to supernova data simulated by Kessler et al. to mimic those of the Dark Energy Survey, our methods achieve (cross-validated) 95 per cent Type Ia purity and 87 per cent Type Ia efficiency on the spectroscopic sample, but only 50 per cent Type Ia purity and 50 per cent efficiency on the photometric sample due to their spectroscopic follow-up strategy. To improve the performance on the photometric sample, we search for better spectroscopic follow-up procedures by studying the sensitivity of our machine-learned supernova classification on the specific strategy used to obtain training sets. With a fixed amount of spectroscopic follow-up time, we find that, despite collecting data on a smaller number of supernovae, deeper magnitude-limited spectroscopic surveys are better for producing training sets. For supernova Ia (II-P) typing, we obtain a 44 per cent (1 per cent) increase in purity to 72 per cent (87 per cent) and 30 per cent (162 per cent) increase in efficiency to 65 per cent (84 per cent) of the sample using a 25th (24.5th) magnitude-limited survey instead of the shallower spectroscopic sample used in the original simulations. When redshift information is available, we incorporate it into our analysis using a novel method of altering the diffusion map representation of the supernovae. Incorporating host redshifts leads to a 5 per cent improvement in Type Ia purity and 13 per cent improvement in Type Ia efficiency. A web service for the supernova classification method used in this paper can be found at .
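The random forest stage of such a pipeline can be sketched in a few lines with scikit-learn. The feature matrix below stands in for diffusion-map coordinates and is purely synthetic, so the numbers carry no scientific meaning; this is not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical light-curve features (e.g. diffusion-map coordinates)
rng = np.random.default_rng(2)
X_spec = rng.normal(size=(400, 10))        # spectroscopically confirmed training set
y_spec = rng.integers(0, 2, 400)           # 1 = Type Ia, 0 = other
X_phot = rng.normal(size=(5000, 10))       # photometric-only sample

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X_spec, y_spec, cv=5).mean())   # cross-validated accuracy
clf.fit(X_spec, y_spec)
p_ia = clf.predict_proba(X_phot)[:, 1]     # P(Type Ia) for each unlabelled object
```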
Method and apparatus for improving heat transfer in a fluidized bed
Lessor, Delbert L.; Robertus, Robert J.
1990-01-01
An apparatus contains a fluidized bed that includes particles of different triboelectrical types, each particle type acquiring an opposite polarity upon contact. The contact may occur between particles of the two types or between particles of either type and structure or fluid present in the apparatus. A fluidizing gas flow is passed through the particles to produce the fluidized bed. Immersed within the bed are electrodes. An alternating EMF source connected to the electrodes applies an alternating electric field across the fluidized bed to cause particles of the first type to move relative to particles of the second type and relative to the gas flow. In a heat exchanger incorporating the apparatus, the electrodes are conduits conveying a fluid to be heated. The two particle types alternately contact each conduit to transfer heat from a hot gas flow to the second fluid within the conduit.
Guo, Yahong; Tsuruga, Ayako; Yamaguchi, Shigeharu; Oba, Koji; Iwai, Kasumi; Sekita, Setsuko; Mizukami, Hajime
2006-06-01
Chloroplast chlB gene encoding subunit B of light-independent protochlorophyllide reductase was amplified from herbarium and crude drug specimens of Ephedra sinica, E. intermedia, E. equisetina, and E. przewalskii. Sequence comparison of the chlB gene indicated that all the E. sinica specimens have the same sequence type (Type S) distinctive from other species, while there are two sequence types (Type E1 and Type E2) in E. equisetina. E. intermedia and E. prezewalskii revealed an identical sequence type (Type IP). E. sinica was also identified by digesting the chlB fragment with Bcl I. A novel method for DNA authentication of Ephedra Herb based on the sequences of the chloroplast chlB gene and internal transcribed spacer of nuclear rRNA genes was developed and successfully applied for identification of the crude drugs obtained in the Chinese market.
Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices
NASA Astrophysics Data System (ADS)
Polstyanko, Sergey V.; Lee, Jin-Fa
1998-03-01
In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
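To make the two-level idea concrete, here is a minimal two-grid V-cycle with Gauss-Seidel smoothing for a 1-D Poisson model problem. It illustrates the general V-cycle structure only; it is not the p-version, indefinite-problem solver of the paper, and the grid size and sweep counts are arbitrary choices.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """A few Gauss-Seidel sweeps on -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid_vcycle(u, f, h, sweeps=3):
    """One two-level V-cycle for the 1-D Poisson problem (illustrative only)."""
    u = gauss_seidel(u, f, h, sweeps)                        # pre-smoothing
    r = np.zeros_like(u)                                     # residual of -u'' = f
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    r_coarse = r[::2].copy()                                 # restriction (injection)
    e_coarse = np.zeros_like(r_coarse)
    e_coarse = gauss_seidel(e_coarse, r_coarse, 2 * h, 50)   # approximate coarse solve
    e_fine = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_coarse)
    u += e_fine                                              # coarse-grid correction
    return gauss_seidel(u, f, h, sweeps)                     # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                             # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_vcycle(u, f, h)
```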
DOE Office of Scientific and Technical Information (OSTI.GOV)
Itagaki, Masafumi; Miyoshi, Yoshinori; Hirose, Hideyuki
A procedure is presented for the determination of geometric buckling for regular polygons. A new computation technique, the multiple reciprocity boundary element method (MRBEM), has been applied to solve the one-group neutron diffusion equation. The main difficulty in applying the ordinary boundary element method (BEM) to neutron diffusion problems has been the need to compute a domain integral, resulting from the fission source. The MRBEM has been developed for transforming this type of domain integral into an equivalent boundary integral. The basic idea of the MRBEM is to apply repeatedly the reciprocity theorem (Green's second formula) using a sequence of higher order fundamental solutions. The MRBEM requires discretization of the boundary only rather than of the domain. This advantage is useful for extensive survey analyses of buckling for complex geometries. The results of survey analyses have indicated that the general form of geometric buckling is B_g^2 = (a_n/R_c)^2, where R_c represents the radius of the circumscribed circle of the regular polygon under consideration. The geometric constant a_n depends on the type of regular polygon and takes the value of π for a square and 2.405 for a circle, an extreme case that has an infinite number of sides. Values of a_n for a triangle, pentagon, hexagon, and octagon have been calculated as 4.190, 2.281, 2.675, and 2.547, respectively.
Thermal barrier coating life-prediction model development
NASA Technical Reports Server (NTRS)
Strangman, T. E.; Neumann, J.; Liu, A.
1986-01-01
The program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant thermal barrier coating (TBC) systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or an argon shrouded plasma-spray (ASPS) applied oxidation resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal. The second year of the program was focused on specimen procurement, TBC system characterization, nondestructive evaluation methods, life prediction model development, and TFE731 engine testing of thermal barrier coated blades. Materials testing is approaching completion. Thermomechanical characterization of the TBC systems, including toughness and spalling strain tests, was completed. Thermochemical testing is approximately two-thirds complete. Preliminary materials life models for the bond coating oxidation and zirconia sintering failure modes were developed. Integration of these life models with airfoil component analysis methods is in progress. Testing of high pressure turbine blades coated with the program TBC systems is in progress in a TFE731 turbofan engine. Eddy current technology feasibility was established with respect to nondestructively measuring the zirconia layer thickness of a TBC system.
Uncertainty factors in screening ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, L.D.; Taggart, M.
2000-06-01
The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
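A screening-level HQ with uncertainty factors reduces to a one-line calculation. The sketch below shows the arithmetic only; the exposure dose, toxicity reference value, and UF values are invented and are not drawn from any of the reviewed assessments or from regulatory guidance.

```python
def hazard_quotient(exposure_dose, toxicity_reference_value, uncertainty_factors=()):
    """Screening-level hazard quotient.

    HQ = estimated exposure dose / (toxicity reference value / product of UFs).
    All numbers used with this function here are illustrative only.
    """
    uf_total = 1.0
    for uf in uncertainty_factors:
        uf_total *= uf
    return exposure_dose / (toxicity_reference_value / uf_total)

# Example: LOAEL-based TRV with hypothetical UFs for LOAEL->NOAEL,
# subchronic->chronic, and interspecies extrapolation
hq = hazard_quotient(exposure_dose=0.8,              # mg/kg-day
                     toxicity_reference_value=25.0,  # mg/kg-day (LOAEL)
                     uncertainty_factors=(10, 10, 3))
print(f"HQ = {hq:.2f} -> {'potential concern' if hq > 1 else 'below screening level'}")
```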
Alivisatos, A. Paul; Colvin, Vickie
1996-01-01
An electroluminescent device is described, as well as a method of making same, wherein the device is characterized by a semiconductor nanocrystal electron transport layer capable of emitting visible light in response to a voltage applied to the device. The wavelength of the light emitted by the device may be changed by changing either the size or the type of semiconductor nanocrystals used in forming the electron transport layer. In a preferred embodiment the device is further characterized by the capability of emitting visible light of varying wavelengths in response to changes in the voltage applied to the device. The device comprises a hole processing structure capable of injecting and transporting holes, and usually comprising a hole injecting layer and a hole transporting layer; an electron transport layer in contact with the hole processing structure and comprising one or more layers of semiconductor nanocrystals; and an electron injecting layer in contact with the electron transport layer for injecting electrons into the electron transport layer. The capability of emitting visible light of various wavelengths is principally based on the variations in voltage applied thereto, but the type of semiconductor nanocrystals used and the size of the semiconductor nanocrystals in the layers of semiconductor nanometer crystals may also play a role in color change, in combination with the change in voltage.
Reference materials for cellular therapeutics.
Bravery, Christopher A; French, Anna
2014-09-01
The development of cellular therapeutics (CTP) takes place over many years, and, where successful, the developer will anticipate the product to be in clinical use for decades. Successful demonstration of manufacturing and quality consistency is dependent on the use of complex analytical methods; thus, the risk of process and method drift over time is high. The use of reference materials (RM) is an established scientific principle and as such also a regulatory requirement. The various uses of RM in the context of CTP manufacturing and quality are discussed, along with why they are needed for living cell products and the analytical methods applied to them. Relatively few consensus RM exist that are suitable for even common methods used by CTP developers, such as flow cytometry. Others have also identified this need and made proposals; however, great care will be needed to ensure any consensus RM that result are fit for purpose. Such consensus RM probably will need to be applied to specific standardized methods, and the idea that a single RM can have wide applicability is challenged. Written standards, including standardized methods, together with appropriate measurement RM are probably the most appropriate way to define specific starting cell types. The characteristics of a specific CTP will to some degree deviate from those of the starting cells; consequently, a product RM remains the best solution where feasible. Each CTP developer must consider how and what types of RM should be used to ensure the reliability of their own analytical measurements. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain
Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young
2010-01-01
Background: Statistical analysis is essential in regard to obtaining objective reliability for medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention to improve the statistical quality of the journal. Methods: All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results: One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions: We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only to applying statistical procedures but also to the reviewing process to improve the value of the article. PMID:20552071
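The most common error of commission, "parametric inference for nonparametric data", can be illustrated with a short scipy sketch that contrasts a t-test with its nonparametric alternative on skewed, hypothetical data; the distributions and sample sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical pain scores for two groups; skewed, far from normal
group_a = rng.exponential(scale=2.0, size=30)
group_b = rng.exponential(scale=3.0, size=30)

# Check normality before reaching for a t-test
print(stats.shapiro(group_a).pvalue, stats.shapiro(group_b).pvalue)

t_res = stats.ttest_ind(group_a, group_b)      # parametric comparison
u_res = stats.mannwhitneyu(group_a, group_b)   # nonparametric alternative
print(t_res.pvalue, u_res.pvalue)
```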
Astrometric Research of Asteroidal Satellites
NASA Astrophysics Data System (ADS)
Kikwaya, J.-B.; Thuillot, W.; Rocher, P.; Vieira Martins, R.; Arlot, J.-E.; Angeli, Cl.
2002-09-01
Several observational methods have been applied in order to detect asteroidal satellites. Some of them were rather successful, such as the stellar occultation and mutual eclipse methods. Recently, other techniques such as space imaging, adaptive optics and radar imaging have brought great improvements in the search for these objects. However, several limitations appear in the type of data that each of them allows us to access. We propose to apply an astrometric method both to detect new asteroidal satellites and to obtain complementary data on some already detected objects (mainly their orbital periods). This method is based on the search for the reflex motion of the primary object due to the orbital motion of a possible satellite. Such an astrometric signature, already searched for by Monet & Monet (1998), may reach several tens of mas. Only a spectral analysis could then detect this signal, under good conditions of signal/noise ratio and thanks to high quality astrometric measurements and coverage by different sites of observation. We have applied such a method to several asteroids. A preliminary result has been obtained from 377 CCD observations of 146 Lucina made at the Haute-Provence Observatory in the south of France. A periodic signal appears in this analysis, leading to data compatible with a first detection of a probable satellite made previously (Arlot et al. 1985) by the occultation method.
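A common way to search for such a periodic astrometric signature is a Lomb-Scargle periodogram on the unevenly sampled residuals. The sketch below uses astropy on synthetic data; the period, amplitude, and noise level are assumptions for illustration, not values from the Lucina observations, and this is not necessarily the spectral analysis used by the authors.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical unevenly sampled astrometric residuals (e.g. RA offsets in mas)
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 300.0, 377))              # observation epochs [days]
period_true = 18.0                                      # assumed satellite period [days]
signal = 30.0 * np.sin(2 * np.pi * t / period_true)     # tens-of-mas-scale wobble
y = signal + rng.normal(0.0, 20.0, t.size)              # measurement noise [mas]

frequency, power = LombScargle(t, y).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"strongest periodicity near {best_period:.1f} days")
```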
A new compound control method for sine-on-random mixed vibration test
NASA Astrophysics Data System (ADS)
Zhang, Buyun; Wang, Ruochen; Zeng, Falin
2017-09-01
Vibration environmental testing (VET) is one of the important and effective methods of providing support for the strength design, reliability and durability testing of mechanical products. A new separation control strategy is proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed mode vibration testing, which is an advanced and intensive type of VET. As the key step of the strategy, the correlation integral method was applied to separate the mixed signals, which included random and sinusoidal components. The feedback control formula of the MIMO linear random vibration system was systematically deduced in the frequency domain, and a Jacobi control algorithm was proposed in view of elements of the power spectral density (PSD) matrix such as the auto-spectra, coherence, and phase. To counter the excessive correction of the excitation in the sine vibration test, a compression factor was introduced to reduce the excitation correction, avoiding damage to the vibration table or other devices. The two methods were combined and applied in the MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the methods proposed in the paper. The test results show that the exceedance values can be accurately controlled within the tolerance range of the references, and the method provides theoretical and practical support for mechanical engineering.
Implementation of a novel efficient low cost method in structural health monitoring
NASA Astrophysics Data System (ADS)
Asadi, S.; Sepehry, N.; Shamshirsaz, M.; Vaghasloo, Y. A.
2017-05-01
In active structural health monitoring (SHM) methods, it is necessary to excite the structure with a preselected signal. Most studies in the field of active SHM focus on applying SHM in higher frequency ranges, since smaller damage can be detected using a higher excitation frequency. Also, to increase the spatial domain of the measurements and enhance the signal-to-noise ratio (SNR), the amplitude of the excitation signal is usually amplified. These issues become substantial where piezoelectric transducers with relatively high capacitance are used, and consequently the need for high-power amplifiers becomes predominant. In this paper, a novel method named the Step Excitation Method (SEM) is proposed and implemented for Lamb-wave and transfer-impedance-based SHM for damage detection in structures. Three different types of structure are studied: a beam, a plate and a pipe. The related hardware is designed and fabricated; it eliminates high-power analog amplifiers and significantly decreases the complexity of the driver. The Spectral Finite Element Method (SFEM) is applied to examine the performance of the proposed SEM. In the proposed method, once the impulse response of the system is determined, the response to any input can be obtained in both finite element simulations and experiments without the need for multiple measurements. The experimental results using SEM are compared with those obtained by the conventional direct excitation method for healthy and damaged structures. The results show an improvement of amplitude resolution in damage detection compared to the conventional method, owing to an SNR improvement of up to 50%.
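The step from a measured impulse response to the response for an arbitrary excitation rests on linear-system convolution. The following is a minimal sketch of that idea, assuming a discrete-time linear time-invariant structure; the impulse response, tone-burst input, and sampling rate are synthetic placeholders, not data from the SEM hardware described above.

```python
# Minimal sketch: once the impulse response h of a linear time-invariant structure
# is known, its response to an arbitrary input x is the discrete convolution y = h * x.
# Both signals here are synthetic placeholders.
import numpy as np

fs = 10_000                       # sampling rate [Hz] (assumed)
t = np.arange(0, 0.05, 1 / fs)

# Synthetic impulse response: damped oscillation of a single structural mode.
h = np.exp(-200 * t) * np.sin(2 * np.pi * 1500 * t)

# Arbitrary excitation: a short tone burst, as might be used for Lamb waves.
x = np.sin(2 * np.pi * 1500 * t) * np.hanning(t.size)

# Predicted response to x obtained from the impulse response alone.
y = np.convolve(x, h)[: t.size] / fs

print(y.shape, float(np.abs(y).max()))
```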
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2013-06-01
The cloud vertical distribution, and especially the cloud base height, which is linked to cloud type, is an important characteristic for describing the impact of clouds in a changing climate. In this work, several methods to estimate the cloud vertical structure (CVS) from atmospheric sounding profiles are compared, considering the number and position of cloud layers, against a ground-based system taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods impose conditions on the relative humidity and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 125 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of the year 2009 and supported by GOES images, which confirm that cloudiness conditions are homogeneous enough across their trajectory. The overall agreement of the methods ranges between 44% and 88%; four methods produce total agreements around 85%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, which could be useful in atmospheric modeling. The total agreement, even when using low-resolution profiles, can be improved up to 91% if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set.
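As context for the threshold-based methods compared above, the sketch below illustrates the general idea of flagging cloud layers from a sounding: contiguous levels whose relative humidity exceeds a threshold are grouped into layers with a base and a top. The synthetic profile, the 95% threshold, and the grouping rule are illustrative assumptions, not the specific criteria of any of the methods evaluated in the study.

```python
# Minimal sketch of a relative-humidity-threshold cloud-layer detector applied to a
# sounding profile. The 95% RH threshold and the synthetic profile are illustrative;
# published CVS methods use more elaborate, height-dependent rules.
import numpy as np

height_m = np.arange(0, 12_000, 250)                      # vertical levels [m]
rh = 60 + 35 * np.exp(-((height_m - 2_000) / 800) ** 2) \
        + 38 * np.exp(-((height_m - 7_500) / 600) ** 2)   # synthetic RH profile [%]

def cloud_layers(z, rh, threshold=95.0):
    """Return (base, top) height pairs of contiguous levels with RH >= threshold."""
    cloudy = rh >= threshold
    layers, base = [], None
    for i, flag in enumerate(cloudy):
        if flag and base is None:
            base = z[i]
        elif not flag and base is not None:
            layers.append((base, z[i - 1]))
            base = None
    if base is not None:
        layers.append((base, z[-1]))
    return layers

print(cloud_layers(height_m, rh))
```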
The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling
Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara
2017-01-01
The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow investigation of the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods have the great advantage of measuring the properties of an analogue material under the actual experimental conditions, since these properties are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, and they show a satisfactory agreement. PMID:28772993
Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M
2013-09-01
Multipath localization techniques have not previously been applied to baleen whale vocalizations because of the difficulty of applying them to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct-reflected time-difference-of-arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections, illustrating that the autocorrelation can be used to extract reflection information from longer-duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize how the behavior of the autocorrelation differs for call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and a humpback whale (Megaptera novaeangliae) from two separate experiments.
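The core idea, that the autocorrelation of a direct signal overlapped with a delayed reflection shows a secondary peak at the reflection delay, can be sketched as follows. The chirp parameters, delay, and attenuation are illustrative assumptions and do not reproduce the paper's derivation or data.

```python
# Minimal sketch: the autocorrelation of (direct + delayed reflection) of a frequency
# sweep shows a secondary peak at the reflection delay, which feeds a time-difference-
# of-arrival localization. All signal parameters are illustrative.
import numpy as np
from scipy.signal import chirp

fs = 2_000                                  # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
direct = chirp(t, f0=100, f1=300, t1=1.0)   # synthetic upswept tonal call

true_delay = 0.12                           # surface-reflection delay [s]
lag = int(true_delay * fs)
received = direct.copy()
received[lag:] += 0.5 * direct[: direct.size - lag]   # attenuated, delayed copy

ac = np.correlate(received, received, mode="full")[received.size - 1 :]
ac[:20] = 0.0                               # suppress the zero-lag main lobe
estimated_delay = np.argmax(ac) / fs

print(f"true delay = {true_delay:.3f} s, estimated = {estimated_delay:.3f} s")
```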
Kita, Tomoko; Komatsu, Katsuko; Zhu, Shu; Iida, Osamu; Sugimura, Koji; Kawahara, Nobuo; Taguchi, Hiromu; Masamura, Noriya; Cai, Shao-Qing
2016-03-01
Various Curcuma rhizomes have been used as medicines or spices in Asia since ancient times. It is very difficult to distinguish them morphologically, especially when they are boiled and dried, which causes misidentification leading to a loss of efficacy. We developed a method for discriminating Curcuma species by intron length polymorphism markers in genes encoding diketide-CoA synthase and curcumin synthase. This method can be applied to the identification not only of fresh plants but also of crude drug or edible spice samples. By applying this method to Curcuma specimens and samples, and constructing a dendrogram based on these markers, seven Curcuma species were clearly distinguished. Moreover, Curcuma longa specimens were geographically distinguishable. On the other hand, Curcuma kwangsiensis (gl type) specimens also showed intraspecies polymorphism, which may have occurred as a result of hybridization with other Curcuma species. The molecular method we developed is a potential tool for the global classification of the genus Curcuma. Copyright © 2015 Elsevier Ltd. All rights reserved.
Yan, Song; Li, Yun
2014-02-15
Despite its great capability to detect rare variant associations, next-generation sequencing is still prohibitively expensive when applied to large samples. In case-control studies, it is thus appealing to sequence only a subset of cases to discover variants and then genotype the identified variants in controls and the remaining cases, under the reasonable assumption that causal variants are usually enriched among cases. However, this approach leads to inflated type-I error if analyzed naively for rare variant association. Several methods have been proposed in the recent literature to control type-I error at the cost of either excluding some sequenced cases or correcting the genotypes of discovered rare variants. All of these approaches therefore suffer from some degree of information loss and are consequently underpowered. We propose a novel method (BETASEQ), which corrects the inflation of type-I error by supplementing pseudo-variants while keeping the original sequence and genotype data intact. Extensive simulations and real data analysis demonstrate that, in most practical situations, BETASEQ leads to higher testing power than existing approaches with guaranteed (controlled or conservative) type-I error. BETASEQ and associated R files, including documentation and examples, are available at http://www.unc.edu/~yunmli/betaseq
NASA Astrophysics Data System (ADS)
Saito, Toru; Nishihara, Satomichi; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2010-10-01
Mukherjee's type of multireference coupled-cluster (MkMRCC), approximate spin-projected spin-unrestricted CC (APUCC), and AP spin-unrestricted Brueckner (APUBD) methods were applied to didehydronated ethylene, the allyl cation, cis-butadiene, and naphthalene. The focus is on the description of the magnetic properties of these diradical species, such as S-T gaps and diradical characters. Several types of orbital sets were examined as reference orbitals for the MkMRCC calculations, and it was found that the choice of orbital set does not have a significant impact on the computational results for these species. Comparison of the MkMRCC results with the APUCC and APUBD results shows that these two types of methods yield similar results. This indicates that the quantum spin-corrected UCC and UBD methods can effectively account for both the nondynamical and dynamical correlation effects that are covered by the MkMRCC method. It was also shown that appropriately parameterized hybrid density functional theory calculations with AP corrections (APUDFT) yielded very accurate data that qualitatively agree with those of the MRCC and APUBD methods. This hierarchy of methods, MRCC, APUCC, and APUDFT, is expected to constitute a series of standard ab initio approaches to radical systems, from which one can choose depending on the size of the system and the required accuracy.
Spectral/hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite-element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in the approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
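The exponential (p-type) convergence claim for smooth solutions can be illustrated with a small numerical experiment, here simply projecting a smooth function onto Legendre polynomials of increasing order rather than solving a flow problem; the target function and error norm are illustrative choices, not part of the paper.

```python
# Minimal sketch of p-type (spectral) convergence: the error of a truncated Legendre
# expansion of a smooth function decays roughly exponentially with the polynomial
# order p. The target function is an illustrative choice.
import numpy as np
from numpy.polynomial import legendre

x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(np.sin(np.pi * x))          # smooth target function on [-1, 1]

for p in (2, 4, 8, 16, 32):
    coeffs = legendre.legfit(x, f, deg=p)        # least-squares Legendre fit
    err = np.max(np.abs(f - legendre.legval(x, coeffs)))
    print(f"p = {p:2d}   max error = {err:.2e}")
```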
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effects model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for the nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension of the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data collected with a wearable accelerometer device that measures daily movement and energy expenditure. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
The Benchmark Farm Program: a method for estimating irrigation water use in southwest Florida
Duerr, A.D.; Trommer, J.T.
1982-01-01
Irrigation water-use data are summarized in this report for 74 farms in the Southwest Florida Water Management District. Most data are for 1978-90, but 18 farms have data extending back to the early 1970's. Data include site number and location, season and year, crop type, irrigation system, monitoring method, and inches of water applied per acre. Crop types include citrus, cucumbers, pasture, peanuts, sod, and strawberries; tropical fish farms are also included. Water-application rates per growing season ranged from 0 inches per acre for several citrus and pasture sites to 239.7 inches per acre for a nursery site. The report also includes rainfall data for 12 stations throughout the study area. (USGS)
The TICTOP nozzle: a new nozzle contouring concept
NASA Astrophysics Data System (ADS)
Frey, Manuel; Makowka, Konrad; Aichner, Thomas
2017-06-01
Currently, mainly two types of nozzle contouring methods are applied in space propulsion: the truncated ideal contour (TIC) and the thrust-optimized parabola (TOP). This article presents a new nozzle contouring method called TICTOP, combining elements of TIC and TOP design. The resulting nozzle is shock-free as the TIC and therefore does not induce restricted shock separation leading to excessive side-loads. Simultaneously, the TICTOP nozzle will allow higher nozzle wall exit pressures and hence give a better separation margin than is the case for a TIC. Hence, this new nozzle type combines the good properties of TIC and TOP nozzles and eliminates their drawbacks. It is especially suited for first stage application in launchers where flow separation and side-loads are design drivers.
New Results on Gain-Loss Asymmetry for Stock Markets Time Series
NASA Astrophysics Data System (ADS)
Grudziecki, M.; Gnatowska, E.; Karpio, K.; Orłowski, A.; Załuska-Kotur, M.
2008-09-01
A method called the investment horizon approach has been successfully used to analyze the stock markets of many different countries. Here we apply a version of this method to study the characteristics of the Polish Pioneer mutual funds. We decided to analyze Pioneer because it has the longest history of investing on the Polish market. Moreover, it apparently manages the largest amount of money among all similar institutions in Poland. We compare various types of Pioneer mutual funds, characterized by the different financial instruments they invest in. Previously, the investment horizon approach produced different characteristics for emerging markets as opposed to mature ones, providing a possible way to quantify stock market maturity. Here we generalize the above-mentioned results to mutual funds of various types.
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger equation including an anti-cubic term has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions has formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique for dealing with various types of non-linear Schrödinger equations.
Classification by causes of dark circles and appropriate evaluation method of dark circles.
Park, S R; Kim, H J; Park, H K; Kim, J Y; Kim, N S; Byun, K S; Moon, T K; Byun, J W; Moon, J H; Choi, G S
2016-08-01
Dark circles refer to a condition presenting as darkness under the eyes. With improvements in the quality of life, dark circles have come to be recognized as a major cosmetic concern. However, it is not easy to classify dark circles because they have various causes. To select suitable instruments and detailed evaluation items, dark circles were classified according to cause through visual assessment, a Wood's lamp test, and a medical history survey of 100 subjects with dark circles. After the classification, subjects were newly recruited for an instrument conformity assessment, through which suitable instruments for dark circle evaluation were selected. We then performed a randomized, placebo-controlled, double-blind clinical trial for dark circles using the effective parameters of the instruments selected in the preliminary test. Dark circles of the vascular type (35%) and the mixed type (54%), a combination of the pigmented and vascular types, were the most common. Twenty-four subjects with mixed-type dark circles applied the test product (vitamin C 3%, vitamin A 0.1%, vitamin E 0.5%) and a placebo on a randomized split-face basis for 8 weeks. The effective parameters (L*, a*, M.I., E.I., quasi L*, quasi a*, and dermal thickness) were measured during the study period. Results showed that the L* value of the Chromameter(®), the melanin index (M.I.) of the Mexameter(®), and the quasi L* value obtained by image analysis improved with statistical significance after applying the test product compared with the placebo. We classified dark circles according to their causes and verified the reliability of the parameters obtained through the instrument conformity assessment by means of the efficacy evaluation. Based on this study, we also suggest newly established methods that can be applied to evaluating the efficacy of functional cosmetics for dark circles. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis
NASA Astrophysics Data System (ADS)
Pratt, D.; Orlowski, N.; McDonnell, J.
2016-12-01
The effect of pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil water extraction techniques: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied the five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothesis that all extraction techniques would provide the same isotopic result, independent of soil type and water content. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. Each method exhibited deviations from the spiked reference water, with soil type and water content showing a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match to the reference water for both soil types. We also compared results for each extraction technique that produced liquid water on both an OA-ICOS and an IRMS instrument; the differences between them were negligible.
NASA Astrophysics Data System (ADS)
Živanović, Vladimir; Jemcov, Igor; Dragišić, Veselin; Atanacković, Nebojša
2017-04-01
Delineation of sanitary protection zones for a groundwater source is a comprehensive and multidisciplinary task. A uniform methodology for protection zoning across different aquifer types has not been established. Currently applied methods mostly rely on the horizontal groundwater travel time toward the tapping structure. On the other hand, groundwater vulnerability assessment methods evaluate the protective function of the unsaturated zone as an important part of groundwater source protection. In some cases surface flow may also be important, because of the rapid transfer of contaminants toward zones of intense infiltration. For the delineation of sanitary protection zones, three major components should therefore be analysed: the vertical travel time through the unsaturated zone, the horizontal travel time through the saturated zone, and the surface water travel time toward intense infiltration zones. Integrating these components into one time-dependent model forms the basis of the presented method for delineating groundwater source protection zones in rocks and sediments of different porosity. The proposed model comprises the travel time components of surface water as well as of groundwater (horizontal and vertical components). The result obtained with the model represents groundwater vulnerability as the sum of the surface and groundwater travel times and corresponds to the travel time of a potential contaminant from the ground surface to the tapping structure. This vulnerability assessment approach does not consider contaminant properties (intrinsic vulnerability), although it can easily be extended to evaluate specific groundwater vulnerability. This concept of sanitary protection zones was applied to two different types of aquifer: the karstic aquifer of the catchment area of the Blederija springs and the "Beli Timok" source in a shallow intergranular aquifer. The first represents a typical karst hydrogeological system with part of the catchment under allogenic recharge, while the second is a groundwater source in a shallow intergranular alluvial aquifer, dominantly recharged by river bank filtration. For the delineation of sanitary protection zones, the applied method has shown the importance of treating all travel time components equally. In the case of the karstic source, the importance of surface flow toward ponor zones is emphasized, as a consequence of the rapid travel time of water relative to diffuse infiltration from the autogenic part of the catchment. For the shallow intergranular aquifer, the character of the unsaturated zone plays a more prominent role in source protection, acting as an important buffer against downward vertical movement. The applicability of the proposed method has been shown regardless of the aquifer type, and the delineated sanitary protection zones yield intelligible results that can be validated with various methods. Key words: groundwater protection zoning, time dependent model, karst aquifer, intergranular aquifer, groundwater source protection
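The time-dependent model described above amounts to summing, for each location, the surface, vertical (unsaturated) and horizontal (saturated) travel time components and comparing the total against the travel-time limits that define each protection zone. The sketch below illustrates only that bookkeeping; the travel-time values and zone thresholds are invented for illustration and are not taken from the study.

```python
# Minimal sketch: classify locations into sanitary protection zones by summing
# surface, vertical (unsaturated) and horizontal (saturated) travel times and
# comparing the total against zone thresholds. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    t_surface_days: float      # surface flow travel time to an infiltration zone
    t_vertical_days: float     # travel time through the unsaturated zone
    t_horizontal_days: float   # travel time through the saturated zone

    @property
    def total_travel_time(self) -> float:
        return self.t_surface_days + self.t_vertical_days + self.t_horizontal_days

def protection_zone(total_days: float) -> str:
    # Hypothetical zone limits (days); real regulations define their own values.
    if total_days < 50:
        return "Zone II (50-day limit)"
    if total_days < 200:
        return "Zone III (200-day limit)"
    return "outside protection zones"

sites = [
    Location("ponor area", 0.5, 2.0, 10.0),
    Location("alluvial plain", 30.0, 60.0, 150.0),
]
for s in sites:
    print(f"{s.name}: {s.total_travel_time:.1f} days -> {protection_zone(s.total_travel_time)}")
```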
Careflow Mining Techniques to Explore Type 2 Diabetes Evolution.
Dagliati, Arianna; Tibollo, Valentina; Cogni, Giulia; Chiovato, Luca; Bellazzi, Riccardo; Sacchi, Lucia
2018-03-01
In this work we describe the application of a careflow mining algorithm to detect the most frequent patterns of care in a type 2 diabetes patients cohort. The applied method enriches the detected patterns with clinical data to define temporal phenotypes across the studied population. Novel phenotypes are discovered from heterogeneous data of 424 Italian patients, and compared in terms of metabolic control and complications. Results show that careflow mining can help to summarize the complex evolution of the disease into meaningful patterns, which are also significant from a clinical point of view.
Making the Connection: Parent and Community Involvement and the California Urban Superintendent
ERIC Educational Resources Information Center
Nguyen-Hernandez, Amy
2010-01-01
The purpose of this study was to redress the paucity of research regarding superintendents and their role in implementing parent and community involvement policies. Another aim was to examine which types of parent and community involvement practices, if any, may be related to student achievement. A mixed methods inquiry was applied. The Measure…
Apply Pesticides Correctly, A Guide for Commercial Applicators: Right-of-Way Pest Control.
ERIC Educational Resources Information Center
Wamsley, Mary Ann, Ed.; Vermeire, Donna M., Ed.
This guide contains basic information to meet specific standards for pesticide applicators. The text is concerned with the recognition of weeds and methods of their control in rights-of-way. Different types of application equipment both airborne and ground are discussed with precautions for the safe and effective use of herbicides. (CS)
Determining the rate of value increase for oaks
Paul S. DeBald; Joseph J. Mendel
1971-01-01
A method used to develop rate of value increase is described as an aid to management decision-making. Regional rates of value increase and financial maturity diameters for ten species common to the oak-hickory type are outlined, and the economic principles involved are explained to show how they apply to either individual trees or stands.
Two Paper Airplane Design Challenges: Customizing for Different Learning Objectives
ERIC Educational Resources Information Center
Meyer, Daniel Z.; Meyer, Allison Antink
2012-01-01
The incorporation of scientific inquiry into college classrooms has steadily risen as faculty work to move away from exclusively didactic methods. One type of inquiry structure, the design task, produces a product rather than simply a conclusion. This offers students a context to apply their understanding of content in a tangible way that has…
Infant Eye-Tracking in the Context of Goal-Directed Actions
ERIC Educational Resources Information Center
Corbetta, Daniela; Guan, Yu; Williams, Joshua L.
2012-01-01
This paper presents two methods that we applied to our research to record infant gaze in the context of goal-oriented actions using different eye-tracking devices: head-mounted and remote eye-tracking. For each type of eye-tracking system, we discuss their advantages and disadvantages, describe the particular experimental setups we used to study…
A Search for the Sources of Excellence: Applying Contemporary Management Theory to Theatre Research.
ERIC Educational Resources Information Center
Jones, Tom; White, Donald D.
A study was conducted to learn about the effective practice of theatre through the application of research methods developed in studies involving other types of organizations. Successful and unsuccessful play directors, as determined by evaluations of their plays in the Southwest Region of the American College Theatre Festival, were surveyed to…
Analysis of Serial and Parallel Algorithms for Use in Molecular Dynamics: Review and Proposals
NASA Astrophysics Data System (ADS)
Mazzone, A. M.
This work analyzes the stability and accuracy of multistep methods, for either serial or parallel calculations, applied to molecular dynamics simulations. Numerical testing is performed by evaluating the equilibrium configurations of mono-elemental crystalline lattices of metallic and semiconducting type (Ag and Si, respectively) and of a cubic CuY compound.
Exploring supervised and unsupervised methods to detect topics in biomedical text
Lee, Minsuk; Wang, Weiqing; Yu, Hong
2006-01-01
Background Topic detection is a task that automatically identifies topics (e.g., "biochemistry" and "protein structure") in scientific articles based on their information content. Topic detection will benefit many other natural language processing tasks, including information retrieval, text summarization and question answering, and is a necessary step towards building an information system that provides an efficient way for biologists to seek information from an ocean of literature. Results We have explored the methods of Topic Spotting, a text categorization task that applies the supervised machine-learning technique naïve Bayes to automatically assign a document to one or more predefined topics, and Topic Clustering, which applies unsupervised hierarchical clustering algorithms to aggregate documents into clusters such that each cluster represents a topic. We have applied our methods to detect the topics of more than fifteen thousand articles that represent over sixteen thousand entries in the Online Mendelian Inheritance in Man (OMIM) database. We have explored bag-of-words features. Additionally, we have explored semantic features, namely the Medical Subject Headings (MeSH) assigned to the MEDLINE records and the Unified Medical Language System (UMLS) semantic types that correspond to the MeSH terms, in addition to bag of words, to facilitate topic detection. Our results indicate that incorporating the MeSH terms and the UMLS semantic types as additional features enhances the performance of topic detection, and naïve Bayes achieves the highest accuracy, 66.4%, for predicting the topic of an OMIM article as one of twenty-five topics. Conclusion Our results indicate that the supervised topic spotting methods outperformed unsupervised topic clustering; on the other hand, the unsupervised topic clustering methods have the advantages of being robust and applicable in real-world settings. PMID:16539745
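A minimal sketch of the supervised topic-spotting setup described above, using a multinomial naïve Bayes classifier over bag-of-words features, is given below; the toy documents and topic labels are placeholders rather than OMIM data, and the scikit-learn pipeline is one possible implementation, not the authors' code.

```python
# Minimal sketch of supervised topic spotting: a multinomial naive Bayes classifier
# trained on bag-of-words features. The documents and labels are toy placeholders,
# not OMIM records.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "enzyme kinetics and metabolic pathway of the mutant protein",
    "alpha helix and beta sheet packing in the folded domain",
    "substrate binding affinity measured by enzyme assay",
    "crystal structure reveals the tertiary fold of the protein",
]
train_topics = ["biochemistry", "protein structure", "biochemistry", "protein structure"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_topics)

print(model.predict(["x-ray structure of the binding domain"]))
```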
Recent Advances in Quantitative Neuroproteomics
Craft, George E; Chen, Anshu; Nairn, Angus C
2014-01-01
The field of proteomics is undergoing rapid development in a number of different areas, including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification analysis, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders, including schizophrenia and drug addiction, and to neurodegenerative diseases, including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide gel electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). The biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies are beginning to shed light on a number of aspects of neuroscience relating to normal brain function as well as to the changes in protein expression and regulation that occur in neuropsychiatric and neurodegenerative disorders. PMID:23623823
High-resolution typing of Chlamydia trachomatis: epidemiological and clinical uses.
de Vries, Henry J C; Schim van der Loeff, Maarten F; Bruisten, Sylvia M
2015-02-01
This review provides a state-of-the-art overview of molecular Chlamydia trachomatis typing methods used for routine diagnostics and scientific studies. Molecular epidemiology uses high-resolution typing techniques such as multilocus sequence typing, multilocus variable number of tandem repeats analysis, and whole-genome sequencing to identify strains based on their DNA sequence. These data can be used for cluster, network and phylogenetic analyses, and serve to unveil transmission networks, risk groups, and evolutionary pathways. High-resolution typing of C. trachomatis strains is applied to monitor treatment efficacy and re-infections, and to study the recent emergence of lymphogranuloma venereum (LGV) amongst men who have sex with men in high-income countries. Chlamydia strain typing has clinical relevance in disease management, as LGV needs longer treatment than non-LGV C. trachomatis. It has also led to the discovery of a new variant Chlamydia strain in Sweden, which was not detected by some commercial C. trachomatis diagnostic platforms. After a brief history and comparison of the various Chlamydia typing methods, the applications of the current techniques are described and future endeavors to extend scientific understanding are formulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
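As a reminder of how a low-order Suzuki-Trotter (Strang) splitting works when applied to a classical ODE, the sketch below integrates a harmonic oscillator by alternating exact sub-flows of the kinetic and potential parts; the step size and oscillator parameters are illustrative and the example is not taken from the study.

```python
# Minimal sketch: second-order Suzuki-Trotter (Strang) splitting applied to the
# classical harmonic oscillator H = p^2/2 + q^2/2, alternating the exact flows of
# the kinetic and potential parts. Parameters are illustrative.
import math

def strang_step(q, p, h):
    p -= 0.5 * h * q        # half step of the potential (force) part
    q += h * p              # full step of the kinetic (drift) part
    p -= 0.5 * h * q        # half step of the potential part
    return q, p

q, p, h, n = 1.0, 0.0, 0.01, 1000
for _ in range(n):
    q, p = strang_step(q, p, h)

t = n * h
print(f"numerical q = {q:.6f}, exact q = {math.cos(t):.6f}")
print(f"energy drift = {abs(0.5 * (p * p + q * q) - 0.5):.2e}")
```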
Quantification of petroleum-type hydrocarbons in avian tissue
Gay, M.L.; Belisle, A.A.; Patton, J.F.
1980-01-01
Methods were developed for the analysis of 16 hydrocarbons in avian tissue. Mechanical extraction with pentane was followed by clean-up on Florisil and Silicar. Residues were determined by gas-liquid chromatography and gas-liquid chromatography-mass spectrometry. The method was applied to the analysis of liver, kidney, fat, and brain tissue of mallard ducks (Anas platyrhynchos) fed a mixture of hydrocarbons. Measurable concentrations of all compounds analyzed were present in all tissues except brain. Highest concentrations were in fat.
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
Final report covering 12 May 2014 to 11 May 2016. …virus software based on binary signatures. A popular method in industry to analyze malware is dynamic analysis in a sandbox. Alternatively, we apply a hybrid method combining concolic testing (dynamic symbolic …)
Electron Injection by E-Field Drift and its Application in Starting-up Tokamaks at Low Loop Voltage
NASA Astrophysics Data System (ADS)
Pan, Yuan; Yan, Xiao-Lin; Liu, Bao-Hua
2003-05-01
We propose an innovative method of electron injection by E-field drift into a plasma device and discuss its application to starting up tokamak plasmas at low loop voltage. The experimental results obtained from the HT-6M tokamak are also presented. The breakdown loop voltage is clearly reduced and the discharge performance is improved by using the electron injection method. The method could also be applied to some other types of plasma device.
On Solutions for the Transient Response of Beams
NASA Technical Reports Server (NTRS)
Leonard, Robert W.
1959-01-01
Williams type modal solutions of the elementary and Timoshenko beam equations are presented for the response of several uniform beams to a general applied load. Example computations are shown for a free-free beam subject to various concentrated loads at its center. Discussion includes factors influencing the convergence of modal solutions and factors to be considered in a choice of beam theory. Results obtained by two numerical procedures, the traveling-wave method and Houbolt's method, are also presented and discussed.
Positron lifetime beam for defect studies in thin epitaxial semiconductor structures
NASA Astrophysics Data System (ADS)
Laakso, A.; Saarinen, K.; Hautojärvi, P.
2001-12-01
Positron annihilation spectroscopies are methods for the direct identification of vacancy-type defects: by measuring the positron lifetime and the Doppler broadening of the annihilation radiation, they provide information about the open volume and concentration of defects and the atoms surrounding them. Both techniques are easily applied to bulk samples, but only Doppler broadening spectroscopy can be employed for thin epitaxial samples by utilizing low-energy positron beams. Here we describe a positron lifetime beam which will provide us with a method to measure lifetimes in thin semiconductor layers.
Reasons to value the health care intangible asset valuation.
Reilly, Robert F
2012-01-01
There are numerous individual reasons to conduct a health care intangible asset valuation. This discussion summarized many of these reasons and considered the common categories of these individual reasons. Understanding the reason for the intangible asset analysis is an important prerequisite to conducting the valuation, both for the analyst and the health care owner/operator. This is because an intangible asset valuation may not be the type of analysis that the owner/operator really needs. Rather, the owner/operator may really need an economic damages measurement, a license royalty rate analysis, an intercompany transfer price study, a commercialization potential evaluation, or some other type of intangible asset analysis. In addition, a clear definition of the reason for the valuation will allow the analyst to understand if (1) any specific analytical guidelines, procedures, or regulations apply and (2) any specific reporting requirement applies. For example, intangible asset valuations prepared for fair value accounting purposes should meet specific ASC 820 fair value accounting guidance. Intangible asset valuations performed for intercompany transfer price tax purposes should comply with the guidance provided in the Section 482 regulations. Likewise, intangible asset valuations prepared for Section 170 charitable contribution purposes should comply with specific reporting requirements. The individual reasons for the health care intangible asset valuation may influence the standard of value applied, the valuation date selected, the valuation approaches and methods applied, the form and format of valuation report prepared, and even the type of professional employed to perform the valuation.
Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.
NASA Astrophysics Data System (ADS)
Xia, Jake Jiqing
The electromagnetic inverse scattering problems considered in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of the Riccati equation are derived and distinguished. New renormalized formulae based on inverting one specific type of the Riccati equation are derived. Relations between the inverse methods of the Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for the inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application to borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to the inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using the electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6. A stratified medium model is used to derive the guidance condition and the reflection coefficient. The results obtained in Chapter 6 provide the physical foundation for applying the inverse methods to the waveguide design problem. In Chapter 7, a global guidance condition for a continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Li, Xiaomeng; Fang, Dansi; Cong, Xiaodong; Cao, Gang; Cai, Hao; Cai, Baochang
2012-12-01
A method is described using rapid and sensitive Fourier transform near-infrared spectroscopy combined with high-performance liquid chromatography-diode array detection for the simultaneous identification and determination of four bioactive compounds in crude Radix Scrophulariae samples. Partial least squares regression was selected as the analysis type, and multiplicative scatter correction, the second derivative, and a Savitzky-Golay filter were adopted for spectral pretreatment. The correlation coefficients (R) of the calibration models were above 0.96 and the root mean square errors of prediction were under 0.028. The developed models were applied to unknown samples with satisfactory results. The established method was validated and can be applied to the intrinsic quality control of crude Radix Scrophulariae.
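The chemometric core of such a calibration, spectral pretreatment followed by partial least squares regression, can be sketched as below; the synthetic spectra, the number of PLS components, and the Savitzky-Golay window are illustrative assumptions rather than the parameters used in the study.

```python
# Minimal sketch of an NIR calibration workflow: Savitzky-Golay second-derivative
# pretreatment followed by partial least squares (PLS) regression. Spectra and
# reference values are synthetic; window length and component count are illustrative.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
concentration = rng.uniform(0.1, 2.0, n_samples)              # reference values
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 90) / 8.0) ** 2)
spectra = concentration[:, None] * peak \
          + 0.02 * rng.standard_normal((n_samples, n_wavelengths))

# Pretreatment: second-derivative Savitzky-Golay filtering along the wavelength axis.
pretreated = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

X_train, X_test, y_train, y_test = train_test_split(pretreated, concentration, random_state=0)
pls = PLSRegression(n_components=3).fit(X_train, y_train)

rmsep = float(np.sqrt(np.mean((pls.predict(X_test).ravel() - y_test) ** 2)))
print(f"R^2 = {pls.score(X_test, y_test):.3f}, RMSEP = {rmsep:.4f}")
```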
Ferreira, Fábio S.; Pereira, João M.S.; Duarte, João V.; Castelo-Branco, Miguel
2017-01-01
Background: Although voxel-based morphometry studies are still the standard for analyzing brain structure, their dependence on massive univariate inferential methods is a limiting factor. A better understanding of brain pathologies can be achieved by applying inferential multivariate methods, which allow the study of multiple dependent variables, e.g. different imaging modalities of the same subject. Objective: Given the widespread use of SPM software in the brain imaging community, the main aim of this work is the implementation of massive multivariate inferential analysis as a toolbox in this software package, applied here to T1 and T2 structural data from diabetic patients and controls. This implementation was compared with the traditional ANCOVA in SPM and with a similar multivariate GLM toolbox (MRM). Method: We implemented the new toolbox and tested it by investigating brain alterations in a cohort of twenty-eight type 2 diabetes patients and twenty-six matched healthy controls, using information from both T1- and T2-weighted structural MRI scans, both separately, using standard univariate VBM, and simultaneously, with multivariate analyses. Results: Univariate VBM replicated predominantly bilateral changes in basal ganglia and insular regions in type 2 diabetes patients. On the other hand, multivariate analyses replicated key findings of the univariate results, while also revealing the thalami as additional foci of pathology. Conclusion: While the presented algorithm must be further optimized, the proposed toolbox is the first implementation of multivariate statistics in SPM8 as a user-friendly toolbox; it shows great potential and is ready to be validated in other clinical cohorts and modalities. PMID:28761571
Nikolakakis, I; Aragon, O B; Malamataris, S
1998-07-01
The purpose of this study was to compare some indicators of capsule-filling performance, as measured by tapped density under different conditions, and elucidate possible quantitative relationships between variation of capsule fill-weight (%CV) and gravitational and inter-particle forces (attractive or frictional) derived from measurements of particle size, true density, low compression and tensile strength. Five common pharmaceutical diluents (lactose, maize starch, talc, Emcocel and Avicel) were investigated and two capsule-filling methods (pouring powder and dosator nozzle) were employed. It was found that for the pouring-type method the appropriateness of Hausner's ratio (HR), Carr's compressibility index (CC%) and Kawakita's constant (alpha) as indicators of capsule fill-weight variation decreases in the order alpha > CC% > HR; the appropriateness of these indicators also decreases with increasing cylinder size and with impact velocity during tapping. For the dosator-type method the appropriateness of the indicators decreases in the order HR > CC% > alpha, the opposite of that for the pouring-type method; the appropriateness of the indicators increases with decreasing cylinder size and impact velocity. The relationship between %CV and the ratio of inter-particle attractive to gravitational forces calculated from measurements of particle size and true density (Fvdw/Wp) was more significant for the pouring-type capsule-filling method. For the dosator-type method a significant relationship (1% level) was found between %CV and the product of Fvdw/Wp and a function expressing the increase, with packing density (p(f)), in the ratio of frictional to attractive inter-particle forces derived from compression (P) and tensile-strength (T) testing, d(log(P/T))/d(p(f)). The value of tapped density in predictions of capsule-filling performance is affected by the testing conditions in a manner depending on the filling method applied. For the pouring-type method predictions can be based on the ratio of attractive (inter-particle) to gravitational forces, whereas for the dosator-type method the contribution of frictional and attractive forces should, because of packing density change, also be taken into account.
NASA Astrophysics Data System (ADS)
Jin, Y.; Lee, D.
2017-12-01
North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of cover types in deforested areas at high spatial resolution could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices help to overcome the confusion of reflectance values that occurs when using single-season images. Coarse-spatial-resolution images can be acquired with a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only the temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classified pixels into three types between the base image and the target image: in the first type, only the reflectance changes, due to phenology, and these pixels supply reflectance, shape and texture information; in the second type, both reflectance and spectrum change in some bands due to phenology (as in rice paddies or farmland), and these pixels supply only shape and texture information; in the third type, reflectance and spectrum change because of land cover change, and these pixels provide no information because we cannot know how the land cover changed in the target image. A different prediction method was applied to each type of pixel. Results show that both STARFM and FSDAF predict with low accuracy for second-type pixels and small patches. Classification using the spatial-temporal image fusion method proposed in this study showed an overall classification accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.
Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model
Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.
2008-01-01
This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transform was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
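A sketch of the feature-extraction step, fitting an AR model to a signal and using its coefficients (or the complex roots of the AR characteristic polynomial) as a compact feature vector, is given below; the synthetic trace, the model order, and the least-squares fitting approach are illustrative assumptions and not the exact processing chain of the paper.

```python
# Minimal sketch: fit an autoregressive (AR) model to a 1-D signal by least squares
# and use the AR coefficients / complex roots of the AR characteristic polynomial as
# a compact feature vector. Signal and model order are illustrative.
import numpy as np

def fit_ar(signal, order):
    """Least-squares fit of x[n] = sum_k a_k * x[n-k]; returns the coefficients a."""
    rows = [signal[i - order:i][::-1] for i in range(order, signal.size)]
    X = np.vstack(rows)
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(2)
t = np.arange(1024)
chromatogram = np.exp(-((t - 400) / 40.0) ** 2) + 0.6 * np.exp(-((t - 650) / 25.0) ** 2) \
               + 0.01 * rng.standard_normal(t.size)       # synthetic GC-like trace

a = fit_ar(chromatogram, order=6)
roots = np.roots(np.concatenate(([1.0], -a)))              # roots of z^p - a_1 z^{p-1} - ...
print("AR coefficients:", np.round(a, 3))
print("feature vector (root magnitudes):", np.round(np.abs(roots), 3))
```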
NASA Astrophysics Data System (ADS)
Asadi Haroni, Hooshang; Hassan Tabatabaei, Seyed
2016-04-01
The Muteh gold mining area is located 160 km northwest of the town of Isfahan. Gold mineralization is of mesothermal type and is associated with silicic, sericitic and carbonate alterations as well as with hematite and goethite. Image processing and interpretation were applied to ASTER satellite imagery covering about 400 km2 of the Muteh gold mining area to identify hydrothermal alterations and iron oxides associated with gold mineralization. After applying preprocessing steps such as radiometric and geometric corrections, the image processing methods of Principal Component Analysis (PCA), Least Squares Fit (LS-Fit) and Spectral Angle Mapper (SAM) were applied to the ASTER data to identify hydrothermal alterations and iron oxides. In this research, reference spectra of minerals such as chlorite, hematite, clay minerals and phengite, identified from laboratory spectral analysis of collected samples, were used to map the hydrothermal alterations. Finally, the identified hydrothermal alteration and iron oxide zones were validated by visiting and sampling some of the mapped hydrothermal alterations.
Life-long battle: Perceptions of type 2 diabetes in Thailand.
Suparee, Nitima; McGee, Paula; Khan, Salim; Pinyopasakul, Wanpen
2015-03-01
The number of people in Thailand who have Type 2 diabetes has increased dramatically, making it one of the country's major health problems. The rising prevalence of diabetes in Thailand is associated with dietary changes, reduced physical activity and health education. Although there is much research on health education programmes, the most effective methods for promoting sustainability of and adherence to self-management among diabetics remain unclear. To examine the perceptions of participants in Thailand regarding Type 2 diabetes and to use the findings to formulate a model for patient education. A grounded theory approach was selected, and semi-structured face-to-face interviews and a focus group were used to gather data from 33 adults with Type 2 diabetes. Five explanatory categories emerged from the data: causing lifelong stress and worry, finding their own ways, after a while, still cannot and wanting a normal life. A new approach to patient education about Type 2 diabetes in Thailand is needed to give patients a better understanding, provide recommendations that they can apply in their daily lives, and include information about alternative medication. The Buddhist way of thinking and effective strategies for enhancing self-efficacy should be applied to patient education to promote sustainability of and adherence to self-management. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
NASA Astrophysics Data System (ADS)
Rosid, M. S.; Augusta, F. F.; Haidar, M. W.
2018-05-01
In general, carbonate secondary pore structure is very complex because of significant diagenesis. Therefore, determining carbonate secondary pore types is an important factor related to production studies. This paper aims not only to identify the secondary pore types but also to predict the distribution of the secondary pore types of a carbonate reservoir. We apply the Differential Effective Medium (DEM) model for analyzing the pore types of carbonate rocks. The input parameter of the DEM inclusion model is the porosity fraction, and the output parameters are the bulk and shear moduli as a function of porosity, which are used as inputs for Vp and Vs modelling. We also apply a seismic post-stack inversion technique to map the pore type distribution from 3D seismic data. Afterward, we create a porosity cube, for which a geostatistical method is preferable given the complexity of the carbonate reservoir. Thus, the results of this study show the secondary porosity distribution of the carbonate reservoir in the "FR" field. In this case, the north-northwest of the study area is dominated by interparticle pores and crack pores; hence, that area has the highest permeability, where more hydrocarbon can accumulate.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on the Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transform the matrix equation into a system of linear algebraic equations. The latter is solved by the Gauss elimination method. The accuracy and validity of this method are demonstrated by solving two numerical examples and comparing the results with wavelet and other existing methods.
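The quadrature-collocation idea can be illustrated on a much simpler problem than the FVIDEs treated above: the hedged sketch below applies Gauss-Legendre nodes and weights in a Nystrom-type discretisation of a Fredholm integral equation of the second kind; the kernel and right-hand side are arbitrary examples, not taken from the paper.

```python
import numpy as np

# Solve u(x) = f(x) + lam * int_{-1}^{1} K(x, t) u(t) dt  (Fredholm, 2nd kind)
# by collocating at Gauss-Legendre nodes and replacing the integral with the
# corresponding quadrature rule (Nystrom method).

n = 16
nodes, weights = np.polynomial.legendre.leggauss(n)

lam = 0.5
K = lambda x, t: np.exp(-np.abs(x - t))   # example kernel (assumption)
f = lambda x: np.cos(np.pi * x)           # example right-hand side (assumption)

# (I - lam * K(x_i, t_j) * w_j) u_j = f(x_i)
A = np.eye(n) - lam * K(nodes[:, None], nodes[None, :]) * weights[None, :]
u = np.linalg.solve(A, f(nodes))          # u approximated at the quadrature nodes
```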
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
A simple test for spacetime symmetry
NASA Astrophysics Data System (ADS)
Houri, Tsuyoshi; Yasui, Yukinori
2015-03-01
This paper presents a simple method for investigating spacetime symmetry for a given metric. The method makes use of the curvature conditions that are obtained from the Killing equations. We use the solutions of the curvature conditions to compute an upper bound on the number of Killing vector fields, as well as Killing-Yano (KY) tensors and closed conformal KY tensors. We also use them in the integration of the Killing equations. By means of the method, we thoroughly investigate KY symmetry of type D vacuum solutions such as the Kerr metric in four dimensions. The method is also applied to a large variety of physical metrics in four and five dimensions.
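For reference, the Killing equation and the curvature (integrability) condition it implies are, in one common convention (signs and index ordering may differ from the paper's),

```latex
% Killing equation and its first integrability (curvature) condition
\nabla_{(\mu}\xi_{\nu)} = 0 , \qquad
\nabla_{\mu}\nabla_{\nu}\,\xi_{\rho} = -\,R_{\nu\rho\mu}{}^{\sigma}\,\xi_{\sigma} ,
% so a Killing vector is fixed by (\xi_{\mu}, \nabla_{[\mu}\xi_{\nu]}) at a point,
% giving at most n + n(n-1)/2 = n(n+1)/2 independent Killing vectors in n dimensions.
```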
Specialized CFD Grid Generation Methods for Near-Field Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Campbell, Richard L.; Elmiligui, Alaa; Cliff, Susan E.; Nayani, Sudheer N.
2014-01-01
Ongoing interest in analysis and design of low sonic boom supersonic transports requires accurate and efficient Computational Fluid Dynamics (CFD) tools. Specialized grid generation techniques are employed to predict near-field acoustic signatures of these configurations. A fundamental examination of grid properties is performed, including grid alignment with flow characteristics and element type. The issues affecting the robustness of cylindrical surface extrusion are illustrated. This study compares three methods in the extrusion family of grid generation methods that produce grids aligned with the freestream Mach angle. These methods are applied to configurations from the First AIAA Sonic Boom Prediction Workshop.
An inviscid-viscous interaction approach to the calculation of dynamic stall initiation on airfoils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebeci, T.; Platzer, M.F.; Jang, H.M.
An interactive boundary-layer method is described for computing unsteady incompressible flow over airfoils, including the initiation of dynamic stall. The inviscid unsteady panel method developed by Platzer and Teng is extended to include viscous effects. The solutions of the boundary-layer equations are obtained with an inverse finite-difference method employing an interaction law based on the Hilbert integral, and the algebraic eddy-viscosity formulation of Cebeci and Smith. The method is applied to airfoils subject to periodic and ramp-type motions and its abilities are examined for a range of angles of attack, reduced frequency, and pitch rate.
Improvement of pre-treatment method for 36Cl/Cl measurement of Cl in natural groundwater by AMS
NASA Astrophysics Data System (ADS)
Nakata, Kotaro; Hasegawa, Takuma
2011-02-01
Estimation of 36Cl/Cl by accelerator mass spectrometry (AMS) is a useful method to trace hydrological processes in groundwater. For accurate estimation, separation of SO4(2-) from Cl- in groundwater is required because 36S affects AMS measurement of 36Cl. Previous studies utilized the difference in solubility between BaSO4 and BaCl2 (BaSO4 method) to chemically separate SO4(2-) from Cl-. However, the accuracy of the BaSO4 method largely depends on operator skill, and consequently Cl- recovery is typically incomplete (70-80%). In addition, the method is time consuming (>1 week), and cannot be applied directly to dilute solutions. In this study, a method based on ion-exchange column chromatography (column method) was developed for separation of Cl- and SO4(2-). Optimum conditions were determined for the diameter and height of the column, type and amount of resin, type and concentration of eluent, and flow rate. The recovery of Cl- was almost 100%, which allowed complete separation from SO4(2-). The separation procedure was short (<6 h), and was successfully applied to dilute (1 mg/L Cl) solutions. Extracted pore water and diluted seawater samples were processed by the column and BaSO4 methods, and then analyzed by AMS to estimate 36S counts and 36Cl/Cl values. 36S counts in samples processed by the column method were stable and lower than those from the BaSO4 method. The column method has the following advantages over the BaSO4 method: (1) complete and stable separation of Cl- and SO4(2-), (2) less operator influence on results, (3) short processing time (<6 h), (4) high (almost 100%) recovery of Cl-, and (5) concentration of Cl- and separation from SO4(2-) in one system for dilute solutions.
Systematic text condensation: a strategy for qualitative analysis.
Malterud, Kirsti
2012-12-01
To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies. Giorgi's psychological phenomenological analysis is the point of departure and inspiration for systematic text condensation. The basic elements of Giorgi's method and the elaboration of these in systematic text condensation are presented, followed by a detailed description of procedures for analysis according to systematic text condensation. Finally, similarities and differences compared with other frequently applied methods for qualitative analysis are identified, as the foundation of a discussion of strengths and limitations of systematic text condensation. Systematic text condensation is a descriptive and explorative method for thematic cross-case analysis of different types of qualitative data, such as interview studies, observational studies, and analysis of written texts. The method represents a pragmatic approach, although inspired by phenomenological ideas, and various theoretical frameworks can be applied. The procedure consists of the following steps: 1) total impression - from chaos to themes; 2) identifying and sorting meaning units - from themes to codes; 3) condensation - from code to meaning; 4) synthesizing - from condensation to descriptions and concepts. Similarities and differences comparing systematic text condensation with other frequently applied qualitative methods regarding thematic analysis, theoretical methodological framework, analysis procedures, and taxonomy are discussed. Systematic text condensation is a strategy for analysis developed from traditions shared by most of the methods for analysis of qualitative data. The method offers the novice researcher a process of intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour.
Simorgh, L; Torkaman, G; Firoozabadi, S M
2008-01-01
This study aimed to examine the effect of tripolar TENS of the vertebral column on the activity of slow and fast motoneurons in 10 healthy non-athlete women aged 22.7 +/- 2.21 years. H-reflex recovery curves of the soleus (slow) and gastrocnemius (fast) muscles were recorded before and after applying tripolar TENS. For recording this curve, rectangular paired stimuli were applied to the tibial nerve (with 40-520 ISI, a frequency of 0.2 Hz and a pulse width of 600 µs). Our findings showed that maximum H-reflex recovery in the gastrocnemius muscle appeared at the shorter ISI, while in the soleus muscle it appeared at the longer ISI, and its amplitude slightly decreased after applying tripolar TENS. It is suggested that tripolar TENS excites not only the skin but also the Ia and Ib afferents in the dorsal column. A synaptic interaction of these afferents in the spinal cord causes the inhibition of type I MNs and facilitation of type II MNs. This effect can be used in muscle tone modulation.
Medical Image Analysis by Cognitive Information Systems - a Review.
Ogiela, Lidia; Takizawa, Makoto
2016-10-01
This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems. Semantic processes are presented as they are applied to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of the data; meaning is contained in information, for example in medical images. Medical image analysis is presented and discussed as it is applied to various types of medical images presenting selected human organs with different pathologies. Those images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. This process is very important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets. Those features allow a new type of analysis to be created.
Wu, Hsin-Hung; Lin, Shih-Yen; Liu, Chih-Wei
2014-01-01
This study combines cluster analysis and LRFM (length, recency, frequency, and monetary) model in a pediatric dental clinic in Taiwan to analyze patients' values. A two-stage approach by self-organizing maps and K-means method is applied to segment 1,462 patients into twelve clusters. The average values of L, R, and F excluding monetary covered by national health insurance program are computed for each cluster. In addition, customer value matrix is used to analyze customer values of twelve clusters in terms of frequency and monetary. Customer relationship matrix considering length and recency is also applied to classify different types of customers from these twelve clusters. The results show that three clusters can be classified into loyal patients with L, R, and F values greater than the respective average L, R, and F values, while three clusters can be viewed as lost patients without any variable above the average values of L, R, and F. When different types of patients are identified, marketing strategies can be designed to meet different patients' needs.
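A hedged sketch of the clustering stage is given below; it replaces the two-stage SOM plus K-means procedure with scikit-learn's K-means on standardised LRFM features and uses synthetic records, so the column names and values are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical patient records with LRFM attributes (columns are assumptions)
df = pd.DataFrame({
    "length":    np.random.randint(1, 1500, 300),   # days between first and last visit
    "recency":   np.random.randint(1, 720, 300),    # days since last visit
    "frequency": np.random.randint(1, 40, 300),     # number of visits
    "monetary":  np.random.gamma(2.0, 800.0, 300),  # out-of-pocket spending
})

X = StandardScaler().fit_transform(df[["length", "recency", "frequency", "monetary"]])
df["cluster"] = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(X)

# Compare each cluster's mean L, R, F against the overall means, as in the
# customer value / customer relationship matrices described above.
overall = df[["length", "recency", "frequency"]].mean()
profile = df.groupby("cluster")[["length", "recency", "frequency", "monetary"]].mean()
loyal = profile[(profile[["length", "recency", "frequency"]] > overall).all(axis=1)]
```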
Cognitive neuropsychology and its vicissitudes: The fate of Caramazza's axioms.
Shallice, Tim
2015-01-01
Cognitive neuropsychology is characterized as the discipline in which one draws conclusions about the organization of the normal cognitive systems from the behaviour of brain-damaged individuals. In a series of papers, Caramazza, later in collaboration with McCloskey, put forward four assumptions as the bridge principles for making such inferences. Four potential pitfalls, one for each axiom, are discussed with respect to the use of single-case methods. Two of the pitfalls also apply to case series and group study procedures, and the other two are held to be indirectly testable or avoidable. Moreover, four other pitfalls are held to apply to case series or group study methods. It is held that inferences from single-case procedures may profitably be supported or rejected using case series/group study methods, but also that analogous support needs to be given in the other direction for functionally based case series or group studies. It is argued that at least six types of neuropsychological method are valuable for extrapolation to theories of the normal cognitive system but that the single- or multiple-case study remains a critical part of cognitive neuropsychology's methods.
NASA Astrophysics Data System (ADS)
Yusoh, R.; Saad, R.; Saidin, M.; Anda, S. T.; Muhammad, S. B.; Ashraf, M. I. M.; Hazreek, Z. A. M.
2018-04-01
Magnetic and resistivity methods have become reliable options in archaeological exploration, and the use of both methods has become popular in recent years. However, the two methods provide different types of sensing in detecting anomalies, and direct interpretation of the anomalies results in a large coverage area for excavation. To overcome this issue, the anomalies from both methods can be combined using ArcGIS software to reduce the excavated coverage area. The case study is located at Sungai Batu, Lembah Bujang, near lot SB2ZZ, where a buried clay brick monument is expected, which makes it a suitable case for applying this technique. The magnetic and resistivity methods were implemented at the study area, where the anomaly coverage areas for the magnetic and resistivity methods were 531.5 m2 and 636 m2, respectively, and the total area of both anomalies was 764 m2. By applying the combined technique, the anomaly area was reduced to 403.7 m2, which reduces the suspected anomalies by 47.16%. The area not suspected to contain the clay brick monument increased from 15.86% to 55.54%, which reduces the cost and labour of excavation.
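The anomaly-combination step can be reproduced outside ArcGIS; the sketch below uses the shapely library to intersect two hypothetical anomaly polygons and report the reduction in area, with made-up coordinates rather than the survey's actual outlines.

```python
from shapely.geometry import Polygon

# Hypothetical anomaly outlines (coordinates are placeholders, in metres)
magnetic_anomaly    = Polygon([(0, 0), (30, 0), (30, 18), (0, 18)])
resistivity_anomaly = Polygon([(12, 5), (42, 5), (42, 25), (12, 25)])

combined = magnetic_anomaly.intersection(resistivity_anomaly)
union_area = magnetic_anomaly.union(resistivity_anomaly).area

print(f"magnetic: {magnetic_anomaly.area:.1f} m^2, "
      f"resistivity: {resistivity_anomaly.area:.1f} m^2, "
      f"overlap: {combined.area:.1f} m^2")
print(f"reduction vs. union: {100 * (1 - combined.area / union_area):.1f} %")
```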
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function and a high-order two-step reinitialization method which is a combination of the closest point finding procedure and the HJ-WENO scheme. The convergence failure of the closest point finding procedure in three dimensions is addressed by employing a proposed multiple junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota
2012-05-30
The influence of different Purge Times on the effectiveness of Pressurized Liquid Extraction (PLE) of volatile oil components from a cypress plant matrix (Cupressus sempervirens) was investigated, applying solvents of diverse extraction efficiencies. The obtained results show a decrease in the mass yields of essential oil components as a result of increased Purge Time. The loss of extracted components depends on the extractant type - the greatest mass yield loss occurred in the case of non-polar solvents, whereas the smallest was found for polar extracts. Comparisons of the PLE method with the Sea Sand Disruption Method (SSDM), the Matrix Solid-Phase Dispersion (MSPD) method and Steam Distillation (SD) were performed to assess the method's accuracy. Regardless of the solvent and Purge Time applied in the PLE process, the total mass yield was lower than that obtained with the simple, short and relatively cheap low-temperature matrix disruption procedures, MSPD and SSDM. Thus, in the case of volatile oil analysis, the application of these methods is advisable.
NASA Astrophysics Data System (ADS)
Hosseini, Kamyar; Mayeli, Peyman; Bekir, Ahmet; Guner, Ozkan
2018-01-01
In this article, a special type of fractional differential equation (FDE) named the density-dependent conformable fractional diffusion-reaction (DDCFDR) equation is studied. The aforementioned equation plays a significant role in the modelling of some phenomena arising in the applied sciences. Well-organized methods, including the exp(-φ(ε))-expansion and modified Kudryashov methods, are employed to generate exact solutions of this equation, some of which are new and reported here for the first time. The results illustrate that both methods perform well in handling the DDCFDR equation.
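For orientation, the generic ansatz of the exp(-Φ(ξ))-expansion method, as it is usually stated in the literature, is reproduced below; the paper's notation and auxiliary-equation parameters may differ.

```latex
% Travelling-wave ansatz of the \exp(-\Phi(\xi))-expansion method
u(\xi) = \sum_{i=0}^{N} a_i \left( e^{-\Phi(\xi)} \right)^{i}, \qquad
\Phi'(\xi) = e^{-\Phi(\xi)} + \mu\, e^{\Phi(\xi)} + \lambda ,
% where N is fixed by homogeneous balance and the constants a_i, \lambda, \mu
% are determined by substituting the ansatz into the reduced ODE.
```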
Linguistic Analysis of the Human Heartbeat Using Frequency and Rank Order Statistics
NASA Astrophysics Data System (ADS)
Yang, Albert C.-C.; Hseu, Shu-Shya; Yien, Huey-Wen; Goldberger, Ary L.; Peng, C.-K.
2003-03-01
Complex physiologic signals may carry unique dynamical signatures that are related to their underlying mechanisms. We present a method based on rank order statistics of symbolic sequences to investigate the profile of different types of physiologic dynamics. We apply this method to heart rate fluctuations, the output of a central physiologic control system. The method robustly discriminates patterns generated from healthy and pathologic states, as well as aging. Furthermore, we observe increased randomness in the heartbeat time series with physiologic aging and pathologic states and also uncover nonrandom patterns in the ventricular response to atrial fibrillation.
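A minimal sketch of the symbolic rank-order idea is given below, assuming a simple binary symbolisation of RR-interval increments; the actual symbolisation and word length used in the study may differ.

```python
import numpy as np
from collections import Counter

def rank_frequency(rr_intervals, word_len=3):
    """Map successive RR-interval changes to binary symbols (1 = increase,
    0 = decrease/no change), group them into overlapping words, and return
    the word frequencies sorted in rank order (rank 1 = most common word)."""
    rr = np.asarray(rr_intervals, dtype=float)
    symbols = (np.diff(rr) > 0).astype(int)
    words = ["".join(map(str, symbols[i:i + word_len]))
             for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return sorted((n / total for n in counts.values()), reverse=True)

# Hypothetical usage: compare the rank-frequency profiles of two recordings
# profile_healthy = rank_frequency(rr_healthy)
# profile_af      = rank_frequency(rr_af)
```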
A One-Year Evaluation of a Free Fissure Sealant Program
M, Bakhtiar; N, Azadi; A, Golkari
2016-01-01
Statement of Problem: Pit and fissure sealant therapy has been shown to be an effective measure in the prevention of occlusal dental caries. Resin-based materials are the most commonly used materials worldwide. A variety of resin-based fissure sealants are produced and used, and most have shown ideal results in research environments. However, their effectiveness in real life, especially in a mass-application program such as Iran's oral health reform plan, is not clear. Objectives: To evaluate the longevity of different fissure sealants applied in Iran's oral health reform plan in Fars Province (southern Iran) after one year. Materials and Methods: Seven counties were selected. One hundred 6- to 8-year-old school children who had undergone fissure sealant therapy in spring 2015 were randomly selected from each county. Their first molars were examined to evaluate the status of the fissure sealants applied one year earlier. Data on the type/brand of fissure sealant materials, the type and experience of the clinicians who applied them, the presence of a chair-side assistant, and whether the children were caries-free at the time of fissure sealant application were collected from existing reports. Results: Data on 1974 teeth from 598 children were used for the final analysis. The effect of the type/brand of the material on the final results was significant and remained significant (p < 0.001) after adjustments for the level of fluoride, urban/rural area, upper/lower jaw, type of clinician who applied the sealant, presence of a chair-side assistant, and the child's gender, age, and caries-free status. Conclusions: Many factors affect the success rate of a fissure sealant therapy program. The type/brand of the material remained significantly related to the success rate of the fissure sealant even after adjustments for other influencing factors. In this study, ClinproTM Sealant (3M/ESPE, USA) showed better longevity after one year of application. PMID:28959758
Karimi, H; Ghaedi, M; Shokrollahi, A; Rajabi, H R; Soylak, M; Karami, B
2008-02-28
A simple, selective and rapid flotation method for the separation and preconcentration of trace amounts of cobalt, nickel, iron and copper ions using phenyl 2-pyridyl ketone oxime (PPKO) has been developed prior to their flame atomic absorption spectrometric determination. The influences of pH, the amount of PPKO as collector, the type and amount of eluting agent, the type and amount of surfactant as floating agent, and the ionic strength on the recoveries of the analytes were evaluated. The influences of concomitant ions on the recoveries of the analyte ions were also examined. The enrichment factor was 93. The detection limits, based on 3 sigma, for Cu, Ni, Co and Fe were 0.7, 0.7, 0.8, and 0.7 ng mL(-1), respectively. The method has been successfully applied to the determination of trace amounts of these ions in various real samples.
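The 3-sigma detection-limit convention mentioned above can be written as a one-line calculation; the blank readings and calibration slope in this sketch are placeholders.

```python
import numpy as np

def detection_limit_3sigma(blank_signals, slope):
    """3-sigma detection limit: three times the standard deviation of the
    blank signal divided by the calibration slope (signal per ng mL^-1)."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical blank absorbances and calibration slope for Cu
blanks = [0.0012, 0.0009, 0.0011, 0.0010, 0.0013, 0.0008, 0.0012]
lod_cu = detection_limit_3sigma(blanks, slope=0.00052)  # ng mL^-1
```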
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane
2016-05-01
Due to the high cost of experimental EMI measurements, significant attention has been focused on numerical simulation. Classical methods such as the Method of Moments or Finite Difference Time Domain are not well suited to this type of problem, as they require a fine discretisation of space and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited to this type of application. SEA is a statistical approach employed to solve high-frequency problems of electromagnetically reverberant cavities at reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity by using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
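A minimal power-balance sketch for two coupled reverberant cavities is given below; the loss factors, frequency, and input power are placeholders rather than values from the aircraft model described above.

```python
import numpy as np

def sea_energies(omega, eta_d, eta_c, p_in):
    """Solve the SEA power-balance equations for subsystem energies.

    omega : angular frequency (rad/s)
    eta_d : (n,) damping loss factors
    eta_c : (n, n) coupling loss factors, eta_c[i, j] couples i -> j
    p_in  : (n,) input powers (W)
    """
    n = len(p_in)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = eta_d[i] + sum(eta_c[i, j] for j in range(n) if j != i)
        for j in range(n):
            if j != i:
                A[i, j] = -eta_c[j, i]
    return np.linalg.solve(omega * A, p_in)

# Two coupled reverberant cavities (placeholder loss factors and input power)
omega = 2 * np.pi * 1.0e9
eta_d = np.array([1e-3, 2e-3])
eta_c = np.array([[0.0, 5e-4],
                  [4e-4, 0.0]])
E = sea_energies(omega, eta_d, eta_c, p_in=np.array([1.0, 0.0]))
```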
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, J.
1985-09-01
We propose a method for finding the Lax pairs and rational solutions of integrable partial differential equations. That is, when an equation possesses the Painleve property, a Baecklund transformation is defined in terms of an expansion about the singular manifold. This Baecklund transformation obtains (1) a type of modified equation that is formulated in terms of Schwarzian derivatives and (2) a Miura transformation from the modified to the original equation. By linearizing the (Riccati-type) Miura transformation, the Lax pair is found. On the other hand, consideration of the (distinct) Baecklund transformations of the modified equations provides a method for the iterative construction of rational solutions. This also obtains the Lax pairs for the modified equations. In this paper we apply this method to the Kadomtsev-Petviashvili equation and the Hirota-Satsuma equations.
Approach to magnetic neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Anatoly A.; Podoynitsyn, Sergey N.; Filippov, Victor I.
2005-11-01
Purpose: The method of magnetic neutron capture therapy can be described as a combination of two methods: magnetic localization of drugs using magnetically targeted carriers, and neutron capture therapy itself. Methods and Materials: In this work, we produced and tested two types of particles for such therapy. Composite ultradispersed ferro-carbon (Fe-C) and iron-boron (Fe-B) particles were formed from vapors of the respective materials. Results: Two-component ultradispersed particles containing Fe and C were tested as magnetic adsorbents of L-boronophenylalanine and borax, and it was shown that borax sorption could be effective for creating a high concentration of boron atoms in the area of the tumor. The kinetics of boron release into physiologic solution demonstrate that ultradispersed Fe-B (10%) could be applied for effective magnetic neutron capture therapy. Conclusion: Both types of particles have high magnetization and magnetic homogeneity, allow stable magnetic suspensions to be formed, and have low toxicity.
[Diagnosis of primary hyperlipoproteinemia in umbilical cord blood (author's transl)].
Parwaresch, M R; Radzun, H J; Mäder, C
1977-10-01
The aim of the present investigation was to assess the frequency of primary dyslipoproteinemia in a random sample of one hundred newborns and to describe the minimal methodological requirements for a sound diagnosis. After comparison of different methods, total lipids were determined by gravimetry, cholesterol and triglycerides by enzymatic methods, and nonesterified fatty acids by direct colorimetry; phospholipids were estimated indirectly. All measurements were applied to umbilical cord sera and to lipoprotein fractions separated by selective precipitation. The diagnosis of type IV hyperlipoproteinemia, which is the most frequent one in adults, is subject to considerable pitfalls in the postnatal period. A primary hyper-alpha-lipoproteinemia occurred in one case and type II hyperlipoproteinemia in two cases, with one of the parents being affected in each case. For mass screening, triglycerides should be assayed in serum and cholesterol in the precipitated and resolubilized LDL fraction, for which the minimal requirements are described.
Golovko, Oksana; Koba, Olga; Kodesova, Radka; Fedorova, Ganna; Kumar, Vimal; Grabic, Roman
2016-07-01
The aim of this study was to develop a simple extraction procedure and a multiresidue liquid chromatography-tandem mass spectrometry method for the determination of a wide range of pharmaceuticals in various soil types. An extraction procedure for 91 pharmaceuticals from 13 soil types, followed by liquid chromatography-tandem mass spectrometry analysis, was optimized. The extraction efficiencies of three solvent mixtures for ultrasonic extraction were evaluated for the 91 pharmaceuticals. The best results were obtained using acetonitrile/water (1/1 v/v with 0.1% formic acid) followed by acetonitrile/2-propanol/water (3/3/4 v/v/v with 0.1% formic acid), which extracted 63 pharmaceuticals. The method was validated at three fortification levels (10, 100, and 1000 ng/g) in all types of representative soils; the recovery of 44 pharmaceuticals ranged between 55 and 135% across all tested soils. The method was applied to analyze actual environmental samples of sediments, soils, and sludge, and 24 pharmaceuticals were found above the limit of quantification, with concentrations ranging between 0.83 ng/g (fexofenadine) and 223 ng/g (citalopram).